
Yes, No, Maybe So: Human Factors Considerations for Fostering Calibrated Trust in Foundation Models Under Uncertainty

Technical Report · DOI: https://doi.org/10.2172/2573046 · OSTI ID: 2573046

High-stakes analytical environments require analysts to evaluate evidence and generate conclusions that inform critical decisions, often under conditions of uncertainty. Probabilistic decision-making based on incomplete or inaccurate information can reduce productivity, compromise national interests, and endanger public safety. Researchers are developing expert systems built on foundation models (FMs) to support analysts’ decision-making by enabling human-artificial intelligence (AI) teaming, in part through the quantification and expression of uncertainty information. As FMs mature, it is imperative to consider analysts’ needs for appropriately interpreting and using that uncertainty information. However, prior research indicates that it remains unclear how analysts engage with FM-generated uncertainty information and the extent to which these interactions influence trust in, and reliance on, expert systems. We plan to review the state of the science and conduct an exploratory, qualitative study to (a) understand how properly communicated uncertainty can foster calibrated trust and appropriate reliance and (b) identify approaches for effectively conveying FM-generated uncertainty information during analytical workflows. We will conduct semi-structured interviews with analysts from a specific high-stakes analytical environment to capture their current experiences with job-related uncertainty and their impressions of FM-generated uncertainty information. As part of the interview protocol, participants will be presented with several different FM outputs and invited to discuss their thoughts and beliefs about the uncertainty information displayed. Participants may also offer insights into how uncertainty influences trust and reliance. The results of this study will help us better understand how analysts currently interpret and use uncertainty information. Our findings may inform human factors recommendations for effectively conveying uncertainty information to foster calibrated trust in, and appropriate reliance on, expert systems. Interaction designers and FM developers can use this knowledge to enhance human-AI teaming and ensure the responsible deployment of FM-based expert systems in analytical workflows.
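To make the notion of FM-generated uncertainty information concrete, the sketch below shows one simple, hypothetical way an expert system might quantify a model's confidence in a generated answer and express it to an analyst. The report does not prescribe any particular method; the entropy-based score, the confidence thresholds, and the verbal labels here are illustrative assumptions only.

    # Hypothetical illustration (not from the report): one simple way an FM-based
    # expert system might quantify and express output uncertainty to an analyst,
    # using average token-level predictive entropy over the generated answer.
    import math

    def token_entropy(probs):
        """Shannon entropy (in bits) of one token's probability distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0.0)

    def summarize_uncertainty(per_token_probs):
        """Return a confidence score in [0, 1] and a verbal label.

        per_token_probs: list of probability distributions, one per generated token.
        """
        vocab_size = len(per_token_probs[0])
        max_entropy = math.log2(vocab_size)            # entropy of a uniform distribution
        mean_entropy = sum(token_entropy(p) for p in per_token_probs) / len(per_token_probs)
        confidence = 1.0 - mean_entropy / max_entropy  # 1.0 = fully certain, 0.0 = maximal uncertainty
        if confidence >= 0.8:
            label = "high confidence"       # thresholds are assumed, not from the report
        elif confidence >= 0.5:
            label = "moderate confidence"
        else:
            label = "low confidence"
        return confidence, label

    if __name__ == "__main__":
        # Toy distributions over a 4-token vocabulary for a 3-token answer.
        example = [
            [0.90, 0.05, 0.03, 0.02],
            [0.70, 0.20, 0.05, 0.05],
            [0.40, 0.30, 0.20, 0.10],
        ]
        score, label = summarize_uncertainty(example)
        print(f"Model answer flagged with {label} (confidence {score:.2f}).")

How such a score is framed for analysts, whether as a number, a verbal label, or a visual cue, is exactly the kind of presentation choice the planned interviews are intended to inform.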

Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA); USDOE Office of Science (SC), Office of Workforce Development for Teachers & Scientists (WDTS)
DOE Contract Number:
AC05-76RL01830
OSTI ID:
2573046
Report Number(s):
PNNL-37971
Country of Publication:
United States
Language:
English

Similar Records

Exploring the role of judgement and shared situation awareness when working with AI recommender systems
Journal Article · July 26, 2024 · Cognition, Technology & Work · OSTI ID: 2406706

How Do Visual Explanations Foster End Users' Appropriate Trust in Machine Learning?
Conference · April 1, 2020 · OSTI ID: 1616684

Characterizing Interaction Uncertainty in Human-Machine Teams
Conference · June 19, 2024 · OSTI ID: 2426428