U.S. Department of Energy
Office of Scientific and Technical Information

Demystifying Cyberattacks: Potential for Securing Energy Systems With Explainable AI

Conference paper

Modernization of energy systems has led to increased interactions among multiple critical infrastructures and diverse stakeholders, making operational decision making more complex and, at times, beyond the cognitive capabilities of human operators. State-of-the-art machine learning and deep learning approaches show promise for supporting users with complex decision-making challenges, such as those occurring in our rapidly transforming cyber-physical energy systems. However, successful adoption of data-driven decision support technology for critical infrastructure will depend on the ability of these technologies to be trustworthy and contextually interpretable. In this paper, we investigate the feasibility of implementing explainable artificial intelligence (XAI) for interpretable detection of cyberattacks in the energy system. Leveraging a proof-of-concept simulation use case of detecting a data falsification attack on a photovoltaic system using the XGBoost algorithm, we demonstrate how Local Interpretable Model-Agnostic Explanations (LIME), one flavor of XAI, can help provide contextual and actionable interpretation of cyberattack detection.

Research Organization:
National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Sponsoring Organization:
USDOE National Renewable Energy Laboratory (NREL), Laboratory Directed Research and Development (LDRD) Program
DOE Contract Number:
AC36-08GO28308
OSTI ID:
2425932
Report Number(s):
NREL/CP-5T00-90743
Country of Publication:
United States
Language:
English

References (15)

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) journal January 2018
Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022) journal November 2022
A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods conference June 2022
A Systematic Review of Human–Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques journal January 2021
Hardware-in-the-Loop Evaluation of Grid-Edge DER Chip Integration Into Next-Generation Smart Meters conference October 2023
The possibilities and limits of XAI in education: a socio-technical perspective journal March 2023
A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts journal January 2023
A Review of Trustworthy and Explainable Artificial Intelligence (XAI) journal January 2023
“Why Should I Trust You?”: Explaining the Predictions of Any Classifier conference January 2016
A Review of Methods for Detecting Point Anomalies on Numerical Dataset conference June 2020
False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers conference October 2023
Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities journal August 2022
Transparency and the Black Box Problem: Why We Do Not Trust AI journal September 2021
Explainable AI in Aerospace for Enhanced System Performance conference October 2021
Study of Communication Boundaries of Primal-Dual-Based Distributed Energy Resource Management Systems (DERMS) conference January 2023