U.S. Department of Energy
Office of Scientific and Technical Information

RLC4CLR (Reinforcement Learning Controller for Critical Load Restoration Problems)

Software · DOI: https://doi.org/10.11578/dc.20220919.5 · OSTI ID: code-72324 · Code ID: 72324

RLC4CLR demonstrates using a reinforcement learning controller (RLC) to solve a critical load restoration (CLR) problem, improving grid resilience after a substation outage event. RLC4CLR consists of two parts. (1) RL environment: This environment encapsulates the CLR problem to be solved and exposes interfacing functions that follow the standard OpenAI Gym format. A power system simulator, OpenDSS, is included to provide the power flow solution, and the controller inputs and outputs (the RL state and action) as well as the reward are defined in this environment. In short, the RL environment is the problem formulation from which the RL agent learns. (2) RL training script: The training script lets the RL agent learn its control policy by interacting with the RL environment. Training uses RLlib, an open-source RL library built on the Ray distributed computing framework, and the script can be run both on a local machine and on the NREL HPC system. Other components of RLC4CLR include input data, e.g., the grid model (standard IEEE test feeders), and files used for results analysis.
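To illustrate the two pieces the abstract describes, the following is a minimal sketch, not the actual RLC4CLR code: a Gym-style environment whose step function would invoke an OpenDSS power flow and return the restoration reward, plus an RLlib/Ray training entry point. The class name CLREnv, the registered name "clr_env", the observation/action dimensions, and the choice of PPO are illustrative assumptions rather than details taken from the record.

```python
# Hypothetical sketch of an OpenAI Gym-format CLR environment and an RLlib
# training call. Names, sizes, and the PPO algorithm are assumptions.
import gym
import numpy as np
from gym import spaces


class CLREnv(gym.Env):
    """Illustrative critical-load-restoration environment in the Gym format."""

    def __init__(self, env_config=None):
        # State could include per-load restoration status and available
        # generation; actions could be load pickup decisions. Shapes are
        # placeholders, not the actual RLC4CLR formulation.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(16,), dtype=np.float32)
        self.action_space = spaces.Box(
            low=0.0, high=1.0, shape=(4,), dtype=np.float32)

    def reset(self):
        # Re-initialize the outage scenario and return the initial state.
        return np.zeros(self.observation_space.shape, dtype=np.float32)

    def step(self, action):
        # Apply the restoration action, run an OpenDSS power flow (e.g. via
        # the opendssdirect package), and compute the reward, e.g. weighted
        # restored critical load. Placeholder values are returned here.
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)
        reward = 0.0
        done = False  # True at the end of the restoration horizon
        return obs, reward, done, {}


if __name__ == "__main__":
    # Minimal RLlib training loop; the same pattern runs on a local machine
    # or on a Ray cluster launched on an HPC system.
    import ray
    from ray import tune
    from ray.tune.registry import register_env

    ray.init()
    register_env("clr_env", lambda cfg: CLREnv(cfg))
    tune.run(
        "PPO",
        config={"env": "clr_env", "num_workers": 2},
        stop={"training_iteration": 10},
    )
```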

Short Name / Acronym:
RLC4CLR
Site Accession Number:
NREL SWR-22-27
Software Type:
Scientific
License(s):
BSD 3-clause "New" or "Revised" License
Programming Language(s):
Python
Research Organization:
National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Sponsoring Organization:
USDOE Office of Electricity (OE)

Primary Award/Contract Number:
AC36-08GO28308
DOE Contract Number:
AC36-08GO28308
Code ID:
72324
OSTI ID:
code-72324
Country of Origin:
United States