Deep Reinforcement Learning for Distribution System Operations: A Tutorial and Survey
Journal Article · Proceedings of the IEEE
- Washington State Univ., Pullman, WA (United States)
- National Renewable Energy Laboratory (NREL), Golden, CO (United States)
- Eversource Energy, Manchester, NH (United States)
The rapid evolution of modern electric power distribution systems into complex networks of interconnected active devices, distributed generation (DG), and storage poses increasing difficulties for system operators. The large-scale integration of distributed energy resources (DERs) and the rapid exchange of measurement data via communication networks present major opportunities for advancing grid operations, but they also introduce greater uncertainty, higher data dimensionality, more complex network and device models, and challenging control and optimization problems. Deep reinforcement learning (DRL) algorithms show promise in addressing these challenges; however, they have not been effectively adapted for power systems applications and require extensive customization for implementation and evaluation. This has created reproducibility challenges and a steep learning curve for researchers new to applying DRL in the power systems domain. To bridge these gaps, this tutorial serves as a resource for researchers interested in exploring learning-based algorithms for operating active power distribution networks. Specifically, this work presents a generalized process for translating sequential decision-making problems in power distribution systems into Markov decision process (MDP) formulations, illustrated through concrete grid service examples. Additionally, we introduce a simple environment design strategy to develop and evaluate example DRL algorithms for distribution system applications, complete with an accompanying code repository that guides users through environment construction.
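To make the MDP-to-environment translation concrete, the sketch below shows one possible Gymnasium-style environment for a simplified volt-var grid service. It is not the paper's repository code: the `SimpleVoltVarEnv` class, its linearized voltage-sensitivity model, and all numeric values are illustrative assumptions standing in for a real power-flow simulation.

```python
# Minimal sketch (not the article's repository code): a Gymnasium-style
# environment for a toy volt-var grid service, assuming a linearized
# voltage/VAR sensitivity model in place of a full power-flow solver.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SimpleVoltVarEnv(gym.Env):
    """Toy MDP: state = per-unit bus voltages, action = DER reactive-power
    setpoints, reward = negative squared deviation from 1.0 p.u."""

    def __init__(self, n_buses: int = 4, horizon: int = 24):
        super().__init__()
        self.n_buses = n_buses
        self.horizon = horizon
        # Observation: bus voltage magnitudes (p.u.).
        self.observation_space = spaces.Box(0.9, 1.1, shape=(n_buses,), dtype=np.float32)
        # Action: normalized reactive-power setpoints for DERs at each bus.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_buses,), dtype=np.float32)
        # Assumed constant voltage/VAR sensitivity matrix (stand-in for power flow).
        self._sensitivity = 0.02 * np.eye(n_buses, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        # Random initial operating point around nominal voltage.
        self._v = (1.0 + self.np_random.uniform(-0.05, 0.05, self.n_buses)).astype(np.float32)
        return self._v.copy(), {}

    def step(self, action):
        action = np.clip(action, -1.0, 1.0).astype(np.float32)
        # Linearized voltage update plus a small random load disturbance.
        disturbance = self.np_random.uniform(-0.01, 0.01, self.n_buses)
        self._v = np.clip(
            self._v + self._sensitivity @ action + disturbance, 0.9, 1.1
        ).astype(np.float32)
        reward = -float(np.sum((self._v - 1.0) ** 2))
        self._t += 1
        terminated = False
        truncated = self._t >= self.horizon
        return self._v.copy(), reward, terminated, truncated, {}
```

An environment of this form can then be paired with any off-the-shelf DRL implementation through the standard reset/step interaction loop.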
- Research Organization:
- National Renewable Energy Laboratory (NREL), Golden, CO (United States)
- Sponsoring Organization:
- National Science Foundation (NSF); USDOE
- Grant/Contract Number:
- AC36-08GO28308
- OSTI ID:
- 2588446
- Report Number(s):
- NREL/JA--6A40-97039
- Journal Information:
- Proceedings of the IEEE, Vol. 113, Issue 6; ISSN 1558-2256; ISSN 0018-9219
- Publisher:
- Institute of Electrical and Electronics Engineers (IEEE)
- Country of Publication:
- United States
- Language:
- English
Similar Records
- Federated Deep Reinforcement Learning for Decentralized VVO of BTM DERs · Conference · October 2024 · OSTI ID: 2477510
- Enhanced Oblique Decision Tree Enabled Policy Extraction for Deep Reinforcement Learning in Power System Emergency Control · Journal Article · Electric Power Systems Research · April 2022 · OSTI ID: 1869799
- Distributional Deep Reinforcement Learning-Based Emergency Frequency Control · Journal Article · IEEE Transactions on Power Systems · November 2021 · OSTI ID: 1845017