Demystifying Cyberattacks: Potential for Securing Energy Systems With Explainable AI
Modernization of energy systems has led to increased interactions among multiple critical infrastructures and diverse stakeholders, making the challenge of operational decision making more complex and, at times, beyond the cognitive capabilities of human operators. State-of-the-art machine learning and deep learning approaches show promise for supporting users with complex decision-making challenges, such as those occurring in our rapidly transforming cyber-physical energy systems. However, successful adoption of data-driven decision support technology for critical infrastructure will depend on the ability of these technologies to be trustworthy and contextually interpretable. In this paper, we investigate the feasibility of implementing explainable artificial intelligence (XAI) for interpretable detection of cyberattacks in the energy system. Leveraging a proof-of-concept simulation use case, the detection of a data falsification attack on a photovoltaic system using the XGBoost algorithm, we demonstrate how Local Interpretable Model-Agnostic Explanations (LIME), one flavor of XAI, can help provide contextual and actionable interpretation of cyberattack detection.
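To make the described workflow concrete, below is a minimal sketch, not the paper's actual pipeline: an XGBoost classifier trained on synthetic photovoltaic telemetry, with LIME explaining a single detection. The feature names, the falsification model, and all data values are hypothetical placeholders assumed for illustration.

```python
# Hedged sketch of XGBoost detection of a (hypothetical) PV data
# falsification attack, explained with LIME. All features and data
# are synthetic stand-ins, not the paper's simulation use case.
import numpy as np
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
features = ["pv_power_kw", "irradiance_wm2", "inverter_voltage_v", "grid_freq_hz"]

# Synthetic "normal" PV operating data.
n = 2000
X_normal = rng.normal(loc=[50.0, 800.0, 480.0, 60.0],
                      scale=[5.0, 50.0, 4.0, 0.02], size=(n, 4))

# Hypothetical falsification attack: inflated power readings that are
# inconsistent with the reported irradiance.
X_attack = X_normal.copy()
X_attack[:, 0] += rng.uniform(15.0, 30.0, size=n)

X = np.vstack([X_normal, X_attack])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = benign, 1 = falsified

model = xgb.XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# LIME fits a local surrogate model around one instance and ranks which
# features pushed the classifier toward the "attack" label; this is the
# contextual, per-detection interpretation the abstract refers to.
explainer = LimeTabularExplainer(X, feature_names=features,
                                 class_names=["benign", "attack"],
                                 discretize_continuous=True)
exp = explainer.explain_instance(X_attack[0], model.predict_proba, num_features=4)
for feature_rule, weight in exp.as_list():
    print(f"{feature_rule}: {weight:+.3f}")
```

In a sketch like this, the LIME output would attribute the alarm to the implausible power/irradiance relationship, which is the kind of actionable, feature-level context an operator can act on rather than an opaque attack/no-attack label.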
- Research Organization:
- National Renewable Energy Laboratory (NREL), Golden, CO (United States)
- Sponsoring Organization:
- USDOE National Renewable Energy Laboratory (NREL), Laboratory Directed Research and Development (LDRD) Program
- DOE Contract Number:
- AC36-08GO28308
- OSTI ID:
- 2425932
- Report Number(s):
- NREL/CP-5T00-90743
- Country of Publication:
- United States
- Language:
- English
Similar Records
Impact of cyberattacks on safety and stability of connected and automated vehicle platoons under lane changes
Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor