
Planning for Resilient Power Distribution Systems using Risk-Based Quantification and Q-Learning

Conference
Authors: [1]; [2]; [3]
  1. Washington State University
  2. Battelle (Pacific Northwest National Laboratory)
  3. Washington State University
Grid hardening is one of the most effective approaches for reducing component failures and restoration effort, thereby increasing the resilience of power systems against extreme events. However, hardening and upgrading the entire system is prohibitively expensive, so the optimal design of a distribution network is challenging. This paper adopts a reinforcement learning algorithm to identify the optimal hardening strategy for enhancing the resilience of power distribution systems. Using Q-learning as the reinforcement learning technique, we find the sequence of optimal hardening actions that enhances the grid's resilience for a given budget. To identify the optimal strategy through Q-learning, Conditional Value at Risk (CVaR) is used as the reward metric. A study on the IEEE 123-bus test feeder validates the effectiveness of the proposed model and shows how to effectively allocate budget-limited resources to plan a resilient power distribution network.
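The abstract does not give the paper's exact formulation, so the following is only a minimal sketch of how a budget-constrained, CVaR-rewarded Q-learning loop for hardening selection could look. CVaR at level alpha is the expected loss in the worst (1 - alpha) tail of the loss distribution, i.e., the mean of the losses at or above the alpha-quantile (the Value at Risk). Everything concrete below is an assumption for illustration: the candidate segment names, hardening costs, load-at-risk values, budget, and the simulate_outage_loss placeholder stand in for the paper's actual IEEE 123-bus damage and restoration model.

```python
import random
from collections import defaultdict

import numpy as np

# Hypothetical candidate hardening actions: feeder segment -> hardening cost,
# plus the load served through each segment. Numbers are illustrative only.
CANDIDATE_SEGMENTS = {"seg_A": 3.0, "seg_B": 2.0, "seg_C": 4.0, "seg_D": 1.5}
LOAD_AT_RISK = {"seg_A": 120.0, "seg_B": 80.0, "seg_C": 150.0, "seg_D": 60.0}
BUDGET = 6.0  # total hardening budget, in the same illustrative units as the costs


def cvar(losses, alpha=0.95):
    """CVaR_alpha: mean of the losses at or above the alpha-quantile (VaR)."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()


def simulate_outage_loss(hardened, n_scenarios=100):
    """Placeholder Monte Carlo: sample load-loss outcomes for a set of hardened
    segments under random extreme-event damage. A real study would evaluate the
    IEEE 123-bus feeder with a power-flow / restoration model instead."""
    losses = []
    for _ in range(n_scenarios):
        loss = 0.0
        for seg in CANDIDATE_SEGMENTS:
            p_fail = 0.3 * (0.2 if seg in hardened else 1.0)  # hardening cuts failure probability
            if random.random() < p_fail:
                loss += LOAD_AT_RISK[seg]
        losses.append(loss)
    return losses


def admissible_actions(state):
    """Segments not yet hardened whose cost still fits within the remaining budget."""
    spent = sum(CANDIDATE_SEGMENTS[s] for s in state)
    return [s for s in CANDIDATE_SEGMENTS
            if s not in state and spent + CANDIDATE_SEGMENTS[s] <= BUDGET]


ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.9, 0.2, 500
Q = defaultdict(float)  # Q[(state, action)]; a state is the frozenset of hardened segments

for _ in range(EPISODES):
    state = frozenset()
    while True:
        actions = admissible_actions(state)
        if not actions:
            break  # budget exhausted: episode ends
        # epsilon-greedy action selection
        action = (random.choice(actions) if random.random() < EPS
                  else max(actions, key=lambda a: Q[(state, a)]))
        next_state = state | {action}
        # Reward: reduction in CVaR of simulated load loss achieved by this hardening step
        reward = cvar(simulate_outage_loss(state)) - cvar(simulate_outage_loss(next_state))
        best_next = max((Q[(next_state, a)] for a in admissible_actions(next_state)), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy rollout of the learned Q-table gives a sequential hardening plan within the budget.
plan, state = [], frozenset()
while admissible_actions(state):
    best = max(admissible_actions(state), key=lambda a: Q[(state, a)])
    plan.append(best)
    state = state | {best}
print("Hardening sequence under budget:", plan)
```

In this sketch the state is the set of segments hardened so far, an episode ends when no remaining action fits the budget, and each step is rewarded by the drop in CVaR of simulated load loss, so the greedy rollout of the learned Q-table yields a sequential hardening plan of the kind the abstract describes.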
Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-76RL01830
OSTI ID:
1844605
Report Number(s):
PNNL-SA-157696
Country of Publication:
United States
Language:
English

Similar Records

ARM-IRL: Adaptive Resilience Metric Quantification Using Inverse Reinforcement Learning
Journal Article · May 2025 · AI · OSTI ID: 2573884

Network Reconfiguration for Enhanced Operational Resilience Using Reinforcement Learning
Conference · September 28, 2022 · OSTI ID: 1900016