Hybrid-RL-MPC4CLR (Hybrid-Reinforcement-Learning-Model-Predictive-Control-for-Reserve-Policy-Assisted-Critical-Load-Restoration-in-Distribution-Grids)

Abstract

Hybrid-RL-MPC4CLR was developed as a hybrid controller for critical load restoration in active distribution grids, combining deep reinforcement learning (RL) and model predictive control (MPC) with the aim of maximizing the total restored load following an extreme event. The RL agent learns a policy for quantifying operating reserve requirements, thereby hedging against uncertainty, while the MPC models grid operations incorporating the RL policy actions (i.e., reserve requirements), renewable (wind and solar) power predictions, and load demand forecasts. The developers formulated the reserve requirement determination problem as a sequential decision-making problem based on the Markov decision process (MDP) and designed an RL learning environment based on the OpenAI Gym framework and the MPC simulation. The RL agent reward and the MPC objective function are designed to maximize and monotonically increase the total restored load while minimizing load shedding and renewable power curtailment. The software is developed in Python. The MPC's optimal power flow (OPF) model is implemented with the Pyomo package; the RL simulation environment is built on the OpenAI Gym framework around the MPC simulation, using various scenarios of renewable energy and load demand profiles and power outage start times; and the RL agent is trained with the RLlib Ray package. The RL algorithm is trained offline using historical forecasts of renewable generation and load demand profiles. Simulation analysis and performance tests are conducted on a modified IEEE 13-bus distribution test feeder containing wind turbine, photovoltaic, microturbine, and battery resources.
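To make the environment design above concrete, the sketch below shows how an OpenAI Gym environment can wrap an MPC simulation for reserve-policy learning. It is illustrative only: the ReservePolicyEnv name, the mpc_simulator interface, the observation size, and the reward shaping are assumptions, not the package's actual API.

    # A minimal sketch (hypothetical names, not the package's actual API) of an
    # OpenAI Gym environment wrapping an MPC simulation for reserve-policy learning.
    import numpy as np
    import gym
    from gym import spaces

    class ReservePolicyEnv(gym.Env):
        """Each step, the agent picks an operating-reserve requirement; an
        MPC/OPF simulation then restores load subject to that reserve."""

        def __init__(self, scenarios, mpc_simulator, horizon=72):
            self.scenarios = scenarios    # renewable/load profiles and outage start times
            self.mpc = mpc_simulator      # assumed object that solves the restoration OPF
            self.horizon = horizon
            # Action: reserve requirement as a fraction of forecast renewable power.
            self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
            # Observation: forecasts, restored-load state, time since outage (size assumed).
            self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                                shape=(10,), dtype=np.float32)

        def reset(self):
            self.t = 0
            scenario = self.scenarios.sample()   # random profiles + outage start time
            self.mpc.initialize(scenario)
            return self._observe()

        def step(self, action):
            reserve = float(action[0])
            # The MPC solves its OPF with the RL-chosen reserve requirement.
            restored, shed, curtailed = self.mpc.solve_step(self.t, reserve)
            # Reward restored load; penalize shedding and curtailment.
            reward = restored - shed - curtailed
            self.t += 1
            done = self.t >= self.horizon
            return self._observe(), reward, done, {}

        def _observe(self):
            return np.asarray(self.mpc.state_vector(self.t), dtype=np.float32)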
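Similarly, a minimal Pyomo sketch of the restoration objective and the monotonic-restoration requirement described above might look as follows; the load set, priority weights, and horizon are assumed, and the actual OPF model's power-flow and device constraints are omitted.

    # An illustrative Pyomo sketch of the restoration objective and the
    # monotonic-restoration constraint; load names, priority weights, and the
    # horizon are assumed, and power-flow/device constraints are omitted.
    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.T = pyo.RangeSet(0, 11)                     # MPC horizon steps (assumed)
    m.L = pyo.Set(initialize=["critical1", "critical2", "noncritical"])
    m.restore = pyo.Var(m.L, m.T, bounds=(0, 1))  # fraction of each load restored
    m.curtail = pyo.Var(m.T, within=pyo.NonNegativeReals)  # renewable curtailment

    weight = {"critical1": 100, "critical2": 50, "noncritical": 1}  # assumed priorities

    # Maximize priority-weighted restored load and minimize curtailment.
    def obj_rule(m):
        return sum(weight[l] * m.restore[l, t] for l in m.L for t in m.T) \
               - sum(m.curtail[t] for t in m.T)
    m.obj = pyo.Objective(rule=obj_rule, sense=pyo.maximize)

    # Monotonicity: once a load is picked up, it is not shed again.
    def monotone_rule(m, l, t):
        if t == m.T.first():
            return pyo.Constraint.Skip
        return m.restore[l, t] >= m.restore[l, t - 1]
    m.monotone = pyo.Constraint(m.L, m.T, rule=monotone_rule)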
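Training with the RLlib Ray package could then follow the usual register-and-tune pattern, sketched below in the Ray 1.x style using the hypothetical environment above; the PPO choice and stopping rule are assumptions, as the record does not name the RL algorithm used.

    # A minimal RLlib (Ray 1.x-style) training sketch reusing the hypothetical
    # ReservePolicyEnv above; PPO and the stopping rule are assumptions.
    import ray
    from ray import tune
    from ray.tune.registry import register_env

    def env_creator(env_config):
        # Build the MPC-backed Gym environment from user-supplied objects.
        return ReservePolicyEnv(env_config["scenarios"], env_config["mpc_simulator"])

    ray.init()
    register_env("reserve_policy_env", env_creator)
    tune.run(
        "PPO",
        stop={"training_iteration": 200},
        config={
            "env": "reserve_policy_env",
            # Placeholders: supply the scenario set and MPC simulator objects here.
            "env_config": {"scenarios": None, "mpc_simulator": None},
            "num_workers": 4,
        },
    )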
Developers:
Eseye, Abinet Tesfaye [1] Zhang, Xiangyu [1] Knueven, Bernard [1] Reynolds, Matthew [1] Liu, Weijia [1] Jones, Wesley [1]
  1. National Renewable Energy Lab. (NREL), Golden, CO (United States)
Release Date:
2022-03-18
Project Type:
Open Source, Publicly Available Repository
Software Type:
Scientific
Programming Languages:
Jupyter Notebook
Python
Licenses:
BSD 3-clause "New" or "Revised" License
Sponsoring Org.:
USDOE Office of Electricity (OE)
Code ID:
72315
Site Accession Number:
NREL SWR-22-25
Research Org.:
National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Country of Origin:
United States

Citation Formats

Eseye, Abinet Tesfaye, Zhang, Xiangyu, Knueven, Bernard, Reynolds, Matthew, Liu, Weijia, and Jones, Wesley. Hybrid-RL-MPC4CLR (Hybrid-Reinforcement-Learning-Model-Predictive-Control-for-Reserve-Policy-Assisted-Critical-Load-Restoration-in-Distribution-Grids). Computer Software. https://github.com/NREL/hybrid-rl-mpc4clr. USDOE Office of Electricity (OE). 18 Mar. 2022. Web. doi:10.11578/dc.20220919.4.
Eseye, Abinet Tesfaye, Zhang, Xiangyu, Knueven, Bernard, Reynolds, Matthew, Liu, Weijia, & Jones, Wesley. (2022, March 18). Hybrid-RL-MPC4CLR (Hybrid-Reinforcement-Learning-Model-Predictive-Control-for-Reserve-Policy-Assisted-Critical-Load-Restoration-in-Distribution-Grids). [Computer software]. https://github.com/NREL/hybrid-rl-mpc4clr. https://doi.org/10.11578/dc.20220919.4.
Eseye, Abinet Tesfaye, Zhang, Xiangyu, Knueven, Bernard, Reynolds, Matthew, Liu, Weijia, and Jones, Wesley. "Hybrid-RL-MPC4CLR (Hybrid-Reinforcement-Learning-Model-Predictive-Control-for-Reserve-Policy-Assisted-Critical-Load-Restoration-in-Distribution-Grids)." Computer software. March 18, 2022. https://github.com/NREL/hybrid-rl-mpc4clr. https://doi.org/10.11578/dc.20220919.4.
@misc{ doecode_72315,
title = {Hybrid-RL-MPC4CLR (Hybrid-Reinforcement-Learning-Model-Predictive-Control-for-Reserve-Policy-Assisted-Critical-Load-Restoration-in-Distribution-Grids)},
author = {Eseye, Abinet Tesfaye and Zhang, Xiangyu and Knueven, Bernard and Reynolds, Matthew and Liu, Weijia and Jones, Wesley},
abstractNote = {Hybrid-RL-MPC4CLR was developed as a hybrid controller for critical load restoration in active distribution grids, combining deep reinforcement learning (RL) and model predictive control (MPC) with the aim of maximizing the total restored load following an extreme event. The RL agent learns a policy for quantifying operating reserve requirements, thereby hedging against uncertainty, while the MPC models grid operations incorporating the RL policy actions (i.e., reserve requirements), renewable (wind and solar) power predictions, and load demand forecasts. The developers formulated the reserve requirement determination problem as a sequential decision-making problem based on the Markov decision process (MDP) and designed an RL learning environment based on the OpenAI Gym framework and the MPC simulation. The RL agent reward and the MPC objective function are designed to maximize and monotonically increase the total restored load while minimizing load shedding and renewable power curtailment. The software is developed in Python. The MPC's optimal power flow (OPF) model is implemented with the Pyomo package; the RL simulation environment is built on the OpenAI Gym framework around the MPC simulation, using various scenarios of renewable energy and load demand profiles and power outage start times; and the RL agent is trained with the RLlib Ray package. The RL algorithm is trained offline using historical forecasts of renewable generation and load demand profiles. Simulation analysis and performance tests are conducted on a modified IEEE 13-bus distribution test feeder containing wind turbine, photovoltaic, microturbine, and battery resources.},
doi = {10.11578/dc.20220919.4},
url = {https://doi.org/10.11578/dc.20220919.4},
howpublished = {[Computer Software] \url{https://doi.org/10.11578/dc.20220919.4}},
year = {2022},
month = {mar}
}