How Do Visual Explanations Foster End Users' Appropriate Trust in Machine Learning?
- Battelle (Pacific Northwest National Laboratory)
- National Institute of Standards and Technology
We investigated the effects of different visual explanations on users' trust in machine learning classification. We proposed three forms of visual explanation of a classification, each based on identifying relevant training instances, and conducted a user study to evaluate these explanations as well as a no-explanation condition. We measured users' trust in the classifier, quantified the effects of the three forms of explanation, and assessed how users' trust changed. We found that participants trust a classifier appropriately when an explanation is available: the combination of a human, a classification algorithm, and an understandable explanation makes better decisions than either the classifier or the human alone. This work advances the state of the art toward building trustworthy machine learning models and informs the design and appropriate use of automated systems.
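The abstract does not describe the mechanics of the explanations beyond "identifying relevant training instances." As a minimal sketch of one common instance-based approach, the example below retrieves the training examples most similar to the instance being classified and presents them with their labels next to the prediction. It assumes scikit-learn; the dataset, model, and neighbor count are illustrative assumptions, not the study's actual materials or its specific notion of relevance.

```python
# Illustrative sketch (not the paper's method): explain a prediction by
# showing the most similar training instances and their ground-truth labels.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Index the training set so we can look up neighbors of any query instance.
nn = NearestNeighbors(n_neighbors=3).fit(X)

query = X[50:51]                  # a single instance to explain (assumed example)
pred = clf.predict(query)[0]
dist, idx = nn.kneighbors(query)  # distances and indices of the nearest training instances

print(f"Predicted class: {pred}")
for d, i in zip(dist[0], idx[0]):
    print(f"  training instance {i}: label={y[i]}, distance={d:.2f}")
```

In such a presentation, a user can judge whether the retrieved training instances actually resemble the query and whether their labels support the prediction, which is the kind of evidence the study's visual explanations are meant to convey.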
- Research Organization: Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
- Sponsoring Organization: USDOE
- DOE Contract Number: AC05-76RL01830
- OSTI ID: 1616684
- Report Number(s): PNNL-SA-138276
- Country of Publication: United States
- Language: English
Similar Records
- Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations
- A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations