
A Hybrid Reinforcement Learning-MPC Approach for Distribution System Critical Load Restoration

Conference

This paper proposes a hybrid control approach for distribution system critical load restoration that combines deep reinforcement learning (RL) and model predictive control (MPC) to maximize the total restored load following an extreme event. RL learns a policy for quantifying operating reserve requirements, thereby hedging against uncertainty, while MPC models grid operations incorporating the RL policy actions (i.e., reserve requirements), renewable (wind and solar) power predictions, and load demand forecasts. We formulate the reserve requirement determination problem as a sequential decision-making problem, modeled as a Markov decision process (MDP), and design an RL learning environment based on the OpenAI Gym framework and the MPC simulation. The RL agent's reward and the MPC objective function are designed to maximize and monotonically increase the total restored load while minimizing load shedding and renewable power curtailment. The RL algorithm is trained offline using historical forecasts of renewable generation and load demand. The method is tested on a modified IEEE 13-bus distribution test feeder containing a wind turbine, a photovoltaic system, a microturbine, and a battery. Case studies demonstrate that the proposed method outperforms policies with static operating reserve requirements.
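As a rough illustration of the MDP formulation described in the abstract, the following minimal sketch (not the authors' implementation) shows how a reserve-requirement environment could be framed with the OpenAI Gym API. The class name, forecast arrays, action bounds, reward weights, and the stand-in `_run_mpc` dispatch are all hypothetical; the paper's actual environment wraps a full MPC simulation of the feeder rather than the toy balance below.

```python
# Illustrative sketch only: the action is the operating reserve requirement,
# the state carries renewable/load forecasts, and the reward mirrors the
# stated objectives (monotonically increasing restored load, penalized
# shedding and curtailment). Weights and dynamics are assumptions.
import numpy as np
import gym
from gym import spaces

class ReserveEnv(gym.Env):
    """State: forecasts and restored load; action: reserve level (fraction of load)."""

    def __init__(self, wind_fcst, solar_fcst, load_fcst):
        super().__init__()
        self.wind_fcst = wind_fcst      # hypothetical per-step forecast arrays
        self.solar_fcst = solar_fcst
        self.load_fcst = load_fcst
        self.horizon = len(load_fcst)
        # Action: reserve requirement as a fraction of the forecast load.
        self.action_space = spaces.Box(low=0.0, high=0.5, shape=(1,), dtype=np.float32)
        # Observation: [wind forecast, solar forecast, load forecast, restored load].
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(4,), dtype=np.float32)

    def reset(self):
        self.t = 0
        self.restored = 0.0
        return self._obs()

    def _obs(self):
        return np.array([self.wind_fcst[self.t], self.solar_fcst[self.t],
                         self.load_fcst[self.t], self.restored], dtype=np.float32)

    def step(self, action):
        reserve = float(action[0]) * self.load_fcst[self.t]
        # Stand-in for the MPC dispatch: given the reserve requirement and the
        # forecasts, return restored load, load shedding, and curtailment.
        restored, shed, curtailed = self._run_mpc(reserve)
        # Reward the increase in restored load; penalize shedding and
        # curtailment (the weights here are illustrative, not from the paper).
        reward = (restored - self.restored) - 10.0 * shed - 1.0 * curtailed
        self.restored = max(self.restored, restored)
        self.t += 1
        done = self.t >= self.horizon
        obs = self._obs() if not done else np.zeros(4, dtype=np.float32)
        return obs, reward, done, {}

    def _run_mpc(self, reserve):
        # Toy energy balance in place of the receding-horizon optimization:
        # restore what forecast generation minus the held-back reserve covers.
        avail = self.wind_fcst[self.t] + self.solar_fcst[self.t] - reserve
        restored = min(max(avail, 0.0), self.load_fcst[self.t])
        shed = self.load_fcst[self.t] - restored
        curtailed = max(avail - restored, 0.0)
        return restored, shed, curtailed
```

In the paper's setup, `_run_mpc` would be replaced by the actual MPC that dispatches the wind turbine, photovoltaic system, microturbine, and battery subject to grid constraints; the RL agent's role is confined to choosing the reserve level at each step.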

Research Organization:
National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Sponsoring Organization:
USDOE Office of Electricity, Advanced Grid Modeling (AGM) Program
DOE Contract Number:
AC36-08GO28308
OSTI ID:
1878557
Report Number(s):
NREL/PO-2C00-83354
Country of Publication:
United States
Language:
English