
A Hybrid Reinforcement Learning-MPC Approach for Distribution System Critical Load Restoration: Preprint

Conference

This paper proposes a hybrid control approach for distribution system critical load restoration that combines deep reinforcement learning (RL) and model predictive control (MPC) to maximize the total load restored following an extreme event. RL learns a policy for quantifying operating reserve requirements, thereby hedging against uncertainty, while MPC models grid operations incorporating the RL policy's actions (i.e., the reserve requirements), renewable (wind and solar) power predictions, and load demand forecasts. We formulate the reserve requirement determination problem as a sequential decision-making problem based on a Markov Decision Process (MDP) and design an RL learning environment built on the OpenAI Gym framework and the MPC model. The RL agent's reward and the MPC objective function are designed to maximize and monotonically increase the total restored load while minimizing load shedding and renewable power curtailment. The RL algorithm is trained offline using historical forecasts of renewable generation and load demand. The method is tested on a modified IEEE 13-bus distribution test feeder containing a wind turbine, a photovoltaic system, a microturbine, and a battery. Case studies demonstrate that the proposed method outperforms other operating reserve determination methods.
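To make the formulation concrete, the sketch below shows what a Gym learning environment of the kind the abstract describes might look like: the agent's action is the operating reserve requirement, and each step invokes an MPC stage that dispatches the grid and reports restored load and curtailment. This is a minimal illustration only, not the paper's implementation: the class name, state vector, toy forecasts, reward weights, and the `_mpc_dispatch` stub (which replaces the paper's actual MPC optimization with a one-line dispatch rule) are all hypothetical, and it uses the classic Gym API.

```python
import numpy as np
import gym
from gym import spaces


class ReserveRequirementEnv(gym.Env):
    """Hypothetical Gym environment: the agent chooses a reserve
    requirement each step; an MPC layer (stubbed here) dispatches the grid."""

    def __init__(self, horizon=24):
        super().__init__()
        self.horizon = horizon
        # Action: reserve requirement as a fraction of forecast renewable power.
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
        # Observation: [hour, wind forecast, solar forecast, load forecast, restored load].
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(5,), dtype=np.float32)
        self.t = 0
        self.restored = 0.0

    def reset(self):
        self.t = 0
        self.restored = 0.0
        return self._obs()

    def _obs(self):
        wind, solar, load = self._forecast(self.t)
        return np.array([self.t, wind, solar, load, self.restored], dtype=np.float32)

    def _forecast(self, t):
        # Placeholder forecast curves; the paper trains on historical
        # forecasts of renewable generation and load demand.
        wind = 0.5 + 0.3 * np.sin(2.0 * np.pi * t / self.horizon)
        solar = max(0.0, np.sin(np.pi * t / self.horizon))
        load = 1.0
        return wind, solar, load

    def _mpc_dispatch(self, reserve_frac, t):
        # Stand-in for the MPC stage: dispatch the renewable power not
        # withheld as reserve, then report restored load and curtailment.
        wind, solar, load = self._forecast(t)
        available = (wind + solar) * (1.0 - reserve_frac)
        restored = min(load, available)
        curtailed = max(0.0, available - load)
        return restored, curtailed

    def step(self, action):
        reserve_frac = float(np.clip(action[0], 0.0, 1.0))
        restored, curtailed = self._mpc_dispatch(reserve_frac, self.t)
        # Reward restored load, penalize curtailment, and heavily penalize
        # any drop in restored load to encourage monotone restoration.
        reward = restored - 0.1 * curtailed - 10.0 * max(0.0, self.restored - restored)
        self.restored = restored
        self.t += 1
        done = self.t >= self.horizon
        return self._obs(), reward, done, {}
```

The drop penalty in the reward mirrors the abstract's requirement that restored load increase monotonically: shedding already-restored load is punished far more than curtailing renewable output. Any standard continuous-action RL algorithm could be trained offline against such an environment (e.g., `env = ReserveRequirementEnv(); obs = env.reset()`).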

Research Organization:
National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Sponsoring Organization:
USDOE Office of Electricity Delivery and Energy Reliability (OE)
DOE Contract Number:
AC36-08GO28308
OSTI ID:
1844201
Report Number(s):
NREL/CP-2C00-81440
Country of Publication:
United States
Language:
English