OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Explainable Artificial Intelligence Technology for Predictive Maintenance

Technical Report · DOI: https://doi.org/10.2172/1998555 · OSTI ID: 1998555

The domestic nuclear power plant fleet has relied on labor-intensive and time-consuming preventive maintenance programs, driving up operation and maintenance costs to achieve high capacity factors. Artificial intelligence and machine learning can help simplify complex problems, such as diagnosing equipment degradation, to enable more effective decision-making. Benefits will be felt not only within existing analog and digital instrumentation and control, but also in work processes, the integration of people with technology, and, most importantly, the business case. Together, these hold promise to make nuclear power more efficient and to reduce costs associated with operation and maintenance. While artificial intelligence and machine learning technologies hold significant promise for the nuclear industry, there are challenges and barriers to their adoption. This report outlines the machine learning adoption barriers (categorized as historical, technical, economic, regulatory, and user) that the industry must overcome to realize the full benefits of artificial intelligence and machine learning capabilities for long-term economic sustainability. It also provides solutions for some of these barriers by focusing on improving the explainability of machine learning to encourage trust from the end user. Trust and explainability are essential to machine learning adoption.

This report focuses on research-developed solutions to some of these barriers while analyzing a non-safety-related system, namely the circulating water system. This system frequently experiences waterbox fouling, which our models preemptively diagnose while explaining to the operator how those conclusions were reached. The report presents and discusses the inherent trade-off between machine learning performance (in terms of accuracy) and explainability, where highly accurate machine learning methods (such as deep learning) are the least explainable and the most explainable methods (such as decision trees) are the least accurate. In addition, the explainability of artificial intelligence techniques is discussed in terms of transparency and post-hoc metrics. The report outlines the importance of data novelty and the value of new information in evaluating both explainability and trustworthiness. Novelty detection helps establish the consistency or inconsistency of new data with respect to the training data. Value of information, in turn, could be part of a user-centric visualization recommendation system that requests additional information to be collected, thereby strengthening the machine learning outcomes.

During this project, a copyrighted user-centric visualization that aligns with a human-in-the-loop approach was developed. The user-centric visualization presents different levels of information and can be tailored to user credentials to gain user confidence. One of its salient features is that it presents machine learning methods together with explainability metrics. A simplified version of the user-centric visualization was presented to 32 users with varying levels of machine learning expertise. Feedback was solicited to test the hypothesis that the app contained sufficient explainability and that users would trust the algorithm. Overall, the app was positively received, and the hypothesis was supported. This report also discusses the trust-but-verify framework, a potential approach to building user trust in artificial intelligence.
The framework addresses trust from the human level to the artificial intelligence level. Its fundamental premise is derived from an observation of nuclear safety culture (i.e., nuclear power plant personnel do not rely on a singular source of data to make a decision). This also ties back to the user-centric visualization, which presents different levels of information to achieve both explainability and trustworthiness of artificial intelligence. Even so, the adoption of artificial intelligence and machine learning in the nuclear industry faces additional barriers, namely regulatory and stakeholder readiness. To overcome these challenges, new solutions must gain regulatory approval and cater to stakeholder needs. The Nuclear Regulatory Commission has a five-year strategic plan that prepares it to review artificial intelligence technologies in licensee submissions. Early and frequent engagement with the regulator is encouraged. Additionally, artificial intelligence solutions should incorporate human-in-the-loop considerations and offer explainability. Stakeholders must prepare by hiring or training staff to adapt to advancing technology in everyday plant tasks.
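
To illustrate the novelty-detection idea described in the abstract, the sketch below fits an isolation forest to hypothetical training data and flags whether new condenser readings are consistent with the conditions the diagnostic model was trained on. This is a generic illustration, not the implementation used in the report; the sensor features, values, and thresholds are invented for the example.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical training data: condenser sensor readings, e.g.,
# [inlet temperature (C), outlet temperature (C), waterbox differential pressure (bar)].
X_train = rng.normal(loc=[20.0, 30.0, 0.5], scale=[1.0, 1.0, 0.05], size=(500, 3))

# Fit the novelty detector on the same data the diagnostic model was trained on.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

# New observations from the plant: the first resembles the training conditions,
# the second does not (e.g., rising differential pressure suggestive of fouling).
X_new = np.array([
    [20.2, 29.8, 0.52],
    [20.1, 27.0, 1.40],
])

# predict() returns +1 for points consistent with the training data and -1 for novelties.
for reading, label in zip(X_new, detector.predict(X_new)):
    status = "consistent with training data" if label == 1 else "novel: treat prediction with caution"
    print(reading, status)

In a user-centric visualization such as the one described above, a novelty flag like this could be surfaced as a cue to lower confidence in the corresponding prediction or to request additional data, consistent with the value-of-information idea.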

Research Organization:
Idaho National Laboratory (INL), Idaho Falls, ID (United States)
Sponsoring Organization:
USDOE Office of Nuclear Energy (NE)
DOE Contract Number:
AC07-05ID14517
OSTI ID:
1998555
Report Number(s):
INL/RPT-23-74159-Rev000; TRN: US2404679
Country of Publication:
United States
Language:
English