Questionnaires for eliciting evaluation data from users of interactive question answering
Evaluating interactive question answering (QA) systems with real users is challenging: traditional evaluation measures based on the relevance of returned items are difficult to employ because relevance judgments can be unstable in multi-user evaluations. The work reported in this paper evaluates the effectiveness of three questionnaires in distinguishing among a set of interactive QA systems: a Cognitive Workload Questionnaire (NASA TLX), and Task and System Questionnaires customized to a specific interactive QA application. The questionnaires were evaluated with four systems, seven analysts, and eight scenarios during a two-week workshop. Overall, the results demonstrate that all three questionnaires are effective at distinguishing among systems, with the Task Questionnaire being the most sensitive. The results also provide initial support for the validity and reliability of the questionnaires.
- Research Organization: Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
- Sponsoring Organization: USDOE
- DOE Contract Number: AC05-76RL01830
- OSTI ID: 1028094
- Report Number(s): PNNL-SA-73804; 400904120; TRN: US201121%%728
- Journal Information: Natural Language Engineering, Vol. 15, Issue 1
- Country of Publication: United States
- Language: English
Similar Records
From Question Answering to Visual Exploration
Quantitative Assessment of Workload and Stressors in Clinical Radiation Oncology