PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems [SWR-22-07]


Abstract

NREL's PowerGridworld provides a modular simulation environment for training heterogeneous, grid-aware, multi-agent reinforcement learning (RL) policies at scale. The package lets the user create component gym environments that can be composed into more complex agents. For example, a grid-interactive building environment can be created by composing component environments that encapsulate the building, PV, and battery physics, respectively. These multi-component environments can then be combined into a multi-agent simulation in which each agent's power consumption or injection becomes an input to an optimal power flow solve on a distribution feeder modeled in OpenDSS. Information from OpenDSS, such as bus voltages and line flows, can be included in the agents' observation spaces to enable grid-aware rewards. The default API for the PowerGridworld simulator conforms to RLlib's MultiAgentEnv API and thus enables distributed training on HPC and cloud resources.
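
As a rough illustration of the composition pattern described above, the following minimal sketch composes single-device gym environments into one agent-level environment whose net power would feed a power-flow solve. The class names (ComponentEnv, MultiComponentEnv), the info keys, and the two-dimensional observations are illustrative placeholders chosen to mirror the description, not the package's actual API.

# Minimal sketch of the composition pattern; names are hypothetical,
# not the actual PowerGridworld API.
import gym
import numpy as np

class ComponentEnv(gym.Env):
    """A single device model, e.g. building thermal, PV, or battery physics."""

    def __init__(self, name):
        self.name = name
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self):
        return np.zeros(2, dtype=np.float32)

    def step(self, action):
        # Placeholder physics: the action directly sets real power.
        real_power_kw = float(np.asarray(action).ravel()[0])  # + consumption / - injection
        obs = np.zeros(2, dtype=np.float32)
        return obs, 0.0, False, {"real_power_kw": real_power_kw}

class MultiComponentEnv(gym.Env):
    """Composes several ComponentEnvs into one agent, e.g. a grid-interactive building."""

    def __init__(self, components):
        self.components = components
        self.observation_space = gym.spaces.Box(
            -np.inf, np.inf, shape=(2 * len(components),), dtype=np.float32)
        self.action_space = gym.spaces.Box(
            -1.0, 1.0, shape=(len(components),), dtype=np.float32)

    def reset(self):
        return np.concatenate([c.reset() for c in self.components])

    def step(self, action):
        obs, reward, net_power_kw = [], 0.0, 0.0
        for comp, a in zip(self.components, np.asarray(action).ravel()):
            o, r, _, info = comp.step([a])
            obs.append(o)
            reward += r
            net_power_kw += info["real_power_kw"]
        # The agent's net power is what a multi-agent wrapper would pass to the
        # OpenDSS power-flow solve; bus voltages could then be appended to obs.
        return np.concatenate(obs), reward, False, {"net_power_kw": net_power_kw}

building = MultiComponentEnv(
    [ComponentEnv("building"), ComponentEnv("pv"), ComponentEnv("battery")])
obs = building.reset()
obs, reward, done, info = building.step(building.action_space.sample())

A collection of such agent-level environments would then be exposed through RLlib's MultiAgentEnv interface (per-agent observation, reward, and done dictionaries) and trained with ray.tune; consult the repository for the package's actual class names and configuration.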
Developers:
Biagioni, David [1]; Chintala, Rohit [1]; Zhang, Xiangyu [1]; Zamzam, Ahmed [1]; King, Jennifer [1]; Vaidhynathan, Deepthi [1]; Wald, Dylan [1]
  1. National Renewable Energy Lab. (NREL), Golden, CO (United States)
Release Date:
2021-10-29
Project Type:
Open Source, Publicly Available Repository
Software Type:
Scientific
Licenses:
BSD 3-clause "New" or "Revised" License
Sponsoring Org.:
USDOE Laboratory Directed Research and Development (LDRD) Program
Code ID:
66827
Site Accession Number:
SWR-22-07
Research Org.:
National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Country of Origin:
United States


Citation Formats

Biagioni, David, Chintala, Rohit, Zhang, Xiangyu, Zamzam, Ahmed, King, Jennifer, Vaidhynathan, Deepthi, and Wald, Dylan. PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems [SWR-22-07]. Computer Software. https://github.com/NREL/PowerGridworld. USDOE Laboratory Directed Research and Development (LDRD) Program. 29 Oct. 2021. Web. doi:10.11578/dc.20211110.1.
Biagioni, David, Chintala, Rohit, Zhang, Xiangyu, Zamzam, Ahmed, King, Jennifer, Vaidhynathan, Deepthi, & Wald, Dylan. (2021, October 29). PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems [SWR-22-07]. [Computer software]. https://github.com/NREL/PowerGridworld. https://doi.org/10.11578/dc.20211110.1.
Biagioni, David, Chintala, Rohit, Zhang, Xiangyu, Zamzam, Ahmed, King, Jennifer, Vaidhynathan, Deepthi, and Wald, Dylan. "PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems [SWR-22-07]." Computer software. October 29, 2021. https://github.com/NREL/PowerGridworld. https://doi.org/10.11578/dc.20211110.1.
@misc{ doecode_66827,
title = {PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems [SWR-22-07]},
author = {Biagioni, David and Chintala, Rohit and Zhang, Xiangyu and Zamzam, Ahmed and King, Jennifer and Vaidhynathan, Deepthi and Wald, Dylan},
abstractNote = {NREL's PowerGridworld provides a modular simulation environment for training heterogeneous, grid-aware, multi-agent reinforcement learning (RL) policies at scale. The package lets the user create component gym environments that can be composed into more complex agents. For example, a grid-interactive building environment can be created by composing component environments that encapsulate the building, PV, and battery physics, respectively. These multi-component environments can then be combined into a multi-agent simulation in which each agent's power consumption or injection becomes an input to an optimal power flow solve on a distribution feeder modeled in OpenDSS. Information from OpenDSS, such as bus voltages and line flows, can be included in the agents' observation spaces to enable grid-aware rewards. The default API for the PowerGridworld simulator conforms to RLlib's MultiAgentEnv API and thus enables distributed training on HPC and cloud resources.},
doi = {10.11578/dc.20211110.1},
url = {https://doi.org/10.11578/dc.20211110.1},
howpublished = {[Computer Software] \url{https://doi.org/10.11578/dc.20211110.1}},
year = {2021},
month = {oct}
}