
Distributed Reinforcement Learning with ADMM-RL

Conference
This paper presents a new algorithm for distributed Reinforcement Learning (RL). RL is an artificial intelligence (AI) control strategy in which controls for highly nonlinear systems over multi-step time horizons are learned from experience rather than computed on the fly by optimization. Here we introduce ADMM-RL, a combination of the Alternating Direction Method of Multipliers (ADMM) and reinforcement learning that allows learned controllers to be integrated as subsystems in generally convergent distributed control applications. ADMM has become the workhorse algorithm for distributed control, combining the advantages of dual decomposition (namely, enabling decoupled, parallel, distributed solution) with the advantages of the method of multipliers (namely, convexification/stability). Our ADMM-RL algorithm replaces one or more of the subproblems in ADMM with several steps of RL. When the nested iterations converge, we are left with a pretrained subsolver that can potentially increase the efficiency of the deployed distributed controller by orders of magnitude. We illustrate ADMM-RL in both distributed wind farm yaw control and distributed grid-aware demand aggregation for water heaters.
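The nested structure described in the abstract can be sketched with a minimal consensus-ADMM loop in which the exact minimization of one subproblem is replaced by a few inexact update steps, standing in for the several RL training steps the paper substitutes for a subsolver. Everything concrete below is an illustrative assumption, not the paper's actual formulation: the quadratic objective `f(x) = (x - 3)^2`, the nonsmooth term `g(z) = |z|`, and all step sizes and iteration counts are chosen only to make the sketch runnable.

```python
def admm_rl_sketch(rho=1.0, outer_iters=50, inner_steps=5, lr=0.1):
    """Consensus ADMM for min f(x) + g(z) s.t. x = z, with
    f(x) = (x - 3)^2 (a smooth stand-in for a control cost) and
    g(z) = |z| (a nonsmooth stand-in for a coupling term).

    The x-update is performed inexactly with a few gradient steps on
    the augmented Lagrangian -- analogous to replacing an ADMM
    subproblem with several RL training steps, as in ADMM-RL.
    (Toy illustration; not the paper's algorithm.)
    """
    x, z, u = 0.0, 0.0, 0.0  # primal vars and scaled dual variable
    for _ in range(outer_iters):
        # x-update: a few inexact steps instead of an exact argmin
        for _ in range(inner_steps):
            grad = 2.0 * (x - 3.0) + rho * (x - z + u)
            x -= lr * grad
        # z-update: exact proximal step for |z| (soft-thresholding)
        v = x + u
        mag = max(abs(v) - 1.0 / rho, 0.0)
        z = mag if v > 0 else -mag
        # dual update: ascent on the consensus constraint x = z
        u += x - z
    return x, z

x, z = admm_rl_sketch()
# x and z agree and approach the minimizer x* = 2.5 of (x-3)^2 + |x|
```

The point of the sketch is the nesting: the outer ADMM iterations enforce consensus between subsystems, while the inner loop only partially solves its subproblem each round, so the per-iteration cost of that subsolver drops while overall convergence is retained.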
Research Organization:
National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Sponsoring Organization:
USDOE Office of Energy Efficiency and Renewable Energy (EERE), Wind Energy Technologies Office (EE-4W)
DOE Contract Number:
AC36-08GO28308
OSTI ID:
1669404
Report Number(s):
NREL/CP-2C00-72690
Country of Publication:
United States
Language:
English