In this study, we investigated strategies for addressing trust violations arising from errors in large language models (LLMs). The study examined the impact of confidence scores, system capability explanations, and user feedback on trust restoration after an error. Sixty-eight participants viewed an LLM's responses to 20 general trivia questions, with an error introduced on the third trial. Each participant was presented with one mitigation strategy. Participants rated their overall trust in the model and the reliability of each answer. Results showed an immediate drop in trust after the error; however, the three strategies did not differ in trust recovery. Further, all conditions showed a logarithmic trend in trust recovery following the error. Differences in overall trust were predicted by the perceived reliability of the answers, suggesting that participants evaluated results critically and used those evaluations to inform their trust in the model. Qualitative data supported this finding: participants expressed lasting distrust despite the LLM's later accuracy. These results underscore the need to prioritize accuracy in LLM deployment, because early errors may irrevocably damage user trust calibration and later adoption.
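As a brief aside on the reported shape of recovery, using illustrative notation that is not the authors' own: one common way to model a logarithmic trend in trust over post-error trials is

$$T(t) = \beta_0 + \beta_1 \ln(t), \qquad t = 1, 2, \ldots,$$

where $T(t)$ is rated trust on the $t$-th trial after the error, $\beta_0$ is the immediate post-error trust level, and $\beta_1 > 0$ captures recovery that is rapid at first and then plateaus.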
Martell, Max J., et al. "Mitigative Strategies for Recovering From Large Language Model Trust Violations." Journal of Cognitive Engineering and Decision Making, vol. 19, no. 1, Dec. 2024, https://doi.org/10.1177/15553434241303577.
Martell, M. J., Baweja, J. A., & Dreslin, B. D. (2024). Mitigative strategies for recovering from large language model trust violations. Journal of Cognitive Engineering and Decision Making, 19(1). https://doi.org/10.1177/15553434241303577
Martell, Max J., Jessica A. Baweja, and Brandon D. Dreslin. "Mitigative Strategies for Recovering From Large Language Model Trust Violations." Journal of Cognitive Engineering and Decision Making 19, no. 1 (2024). https://doi.org/10.1177/15553434241303577.
@article{osti_2560812,
author = {Martell, Max J. and Baweja, Jessica A. and Dreslin, Brandon D.},
title = {Mitigative Strategies for Recovering From Large Language Model Trust Violations},
doi = {10.1177/15553434241303577},
url = {https://www.osti.gov/biblio/2560812},
journal = {Journal of Cognitive Engineering and Decision Making},
issn = {1555-3434},
number = {1},
volume = {19},
place = {United States},
publisher = {Sage Publications},
year = {2024},
month = {12}}
Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE Laboratory Directed Research and Development (LDRD) Program; USDOE Office of Science (SC), Office of Workforce Development for Teachers & Scientists (WDTS)
Grant/Contract Number:
AC05-76RL01830
OSTI ID:
2560812
Alternate ID(s):
OSTI ID: 2479660
Report Number(s):
PNNL-SA-193711
Journal Information:
Journal of Cognitive Engineering and Decision Making, Vol. 19, Issue 1; ISSN 1555-3434