Multi-Agent Safe Policy Learning for Power Management of Networked Microgrids
- Iowa State Univ., Ames, IA (United States)
- Argonne National Lab. (ANL), Lemont, IL (United States)
This paper presents a supervised multi-agent safe policy learning (SMAS-PL) method for optimal power management of networked microgrids (MGs) in distribution systems. While unconstrained reinforcement learning (RL) algorithms are black-box decision models that could fail to satisfy grid operational constraints, our proposed method considers AC power flow equations and other operational limits. Accordingly, the training process employs the gradient information of operational constraints to ensure that the optimal control policy functions generate safe and feasible decisions. Furthermore, we have developed a distributed consensus-based optimization approach to train the agents’ policy functions while maintaining MGs’ privacy and data ownership boundaries. After training, the learned optimal policy functions can be safely used by the MGs to dispatch their local resources, without the need to solve a complex optimization problem from scratch. Lastly, numerical experiments have been devised to verify the performance of the proposed method.
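The abstract describes two key ideas: a policy update that uses the gradient information of operational constraints to keep dispatch decisions feasible, and a consensus step that lets microgrid agents train jointly without sharing local data. The toy sketch below illustrates both mechanisms under strong simplifying assumptions; it is not the paper's SMAS-PL algorithm. The linear policies, the box/linear constraint standing in for linearized AC power flow limits, and names such as `project_to_limits` and `MicrogridAgent` are all hypothetical.

```python
import numpy as np

def project_to_limits(action, lo, hi, g, b):
    """Clip a dispatch vector to box limits, then apply a first-order
    correction along the gradient of a linear constraint g.a <= b
    (a stand-in for linearized grid operational limits)."""
    a = np.clip(action, lo, hi)
    violation = g @ a - b
    if violation > 0:
        a = a - violation * g / (g @ g)  # step along constraint gradient
    return a

class MicrogridAgent:
    """Toy agent with a linear policy a = theta * state (illustrative only)."""
    def __init__(self, dim, seed):
        rng = np.random.default_rng(seed)
        self.theta = rng.normal(size=dim)

    def act(self, state, lo, hi, g, b):
        # Safe decision: raw policy output projected into the feasible set.
        return project_to_limits(self.theta * state, lo, hi, g, b)

    def policy_gradient_step(self, state, price, lr=0.05):
        # Toy local objective (minimize price-weighted quadratic cost);
        # each agent updates its own parameters from local data only.
        a = self.theta * state
        grad = -2.0 * price * a * state
        self.theta += lr * grad

def consensus_round(agents, weights):
    """Average policy parameters over a communication graph.
    `weights` is a doubly stochastic mixing matrix; only parameters
    are exchanged, never raw local measurements."""
    thetas = np.array([ag.theta for ag in agents])
    mixed = weights @ thetas
    for ag, th in zip(agents, mixed):
        ag.theta = th
```

With a uniform mixing matrix, one `consensus_round` drives all agents to the average of their parameters, mimicking how the trained policy functions are shared while preserving data-ownership boundaries.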
- Research Organization:
- Iowa State Univ., Ames, IA (United States)
- Sponsoring Organization:
- USDOE Office of Energy Efficiency and Renewable Energy (EERE), Renewable Power Office, Wind Energy Technologies Office
- Grant/Contract Number:
- EE0008956; AC02-06CH11357
- OSTI ID:
- 1765351
- Alternate ID(s):
- OSTI ID: 1776789
- Journal Information:
- IEEE Transactions on Smart Grid, Vol. 12, Issue 2; ISSN 1949-3053
- Publisher:
- IEEE
- Country of Publication:
- United States
- Language:
- English