U.S. Department of Energy
Office of Scientific and Technical Information

How Do Visual Explanations Foster End Users' Appropriate Trust in Machine Learning?

Conference
 [1];  [1];  [2];  [1]
  1. Battelle (Pacific Northwest Laboratory)
  2. National Institute of Standards and Technology

We investigated the effects of different visual explanations on users' trust in machine learning classification. We proposed three forms of visual explanation of a classification, each based on identifying relevant training instances, and conducted a user study that evaluated these three forms alongside a no-explanation condition. We measured users' trust in the classifier, quantified the effects of each form of explanation, and assessed how users' trust changed. We found that participants trust a classifier more appropriately when an explanation is available: the combination of a human, a classification algorithm, and an understandable explanation makes better decisions than either the classifier or the human alone. This work advances the state of the art toward building trustworthy machine learning models and informs the design and appropriate use of automated systems.
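The paper does not publish code, but the core idea of an instance-based explanation, surfacing the training instances most relevant to a prediction, can be sketched. The following is a hypothetical minimal illustration (names and the nearest-neighbor criterion are assumptions, not the authors' method), where the explanation for a query point is simply its closest training instances and their labels:

```python
import numpy as np

def nearest_training_instances(x, X_train, y_train, k=3):
    """Return indices and labels of the k training instances closest to x
    (Euclidean distance) -- a simple instance-level explanation of a
    classifier's prediction. A hypothetical sketch, not the paper's code."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
    idx = np.argsort(dists)[:k]                  # indices of the k nearest
    return idx, y_train[idx]

# Toy 2-D data: two well-separated classes
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y_train = np.array([0, 0, 0, 1, 1, 1])

idx, labels = nearest_training_instances(np.array([0.05, 0.05]), X_train, y_train, k=3)
print(labels)  # the three nearest neighbors all carry label 0
```

Presenting these retrieved instances visually (e.g., as example images next to the prediction) is the kind of explanation the study compares against a no-explanation baseline.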

Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-76RL01830
OSTI ID:
1616684
Report Number(s):
PNNL-SA-138276
Country of Publication:
United States
Language:
English

Similar Records

Machine learning model explanation apparatus and methods
Patent · October 2023 · OSTI ID: 2293768

Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations
Conference · May 2017 · OSTI ID: 1358512

A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations
Conference · October 2017 · OSTI ID: 1501525
