U.S. Department of Energy
Office of Scientific and Technical Information

A Reinforcement Learning Approach to Parameter Selection for Distributed Optimal Power Flow

Conference
With the increasing penetration of distributed energy resources, distributed optimization algorithms have attracted significant attention for power systems applications due to their potential for superior scalability, privacy, and robustness to single points of failure. The Alternating Direction Method of Multipliers (ADMM) is a popular distributed optimization algorithm; however, its convergence performance is highly dependent on the selection of penalty parameters, which are usually chosen heuristically. In this work, we use reinforcement learning (RL) to develop an adaptive penalty parameter selection policy for the alternating current optimal power flow (ACOPF) problem solved via ADMM, with the goal of minimizing the number of iterations until convergence. We train our RL policy using deep Q-learning and show that this policy can significantly accelerate convergence (up to a 59% reduction in the number of iterations compared to existing, curvature-informed penalty parameter selection methods). Furthermore, we show that our RL policy demonstrates promise for generalizability, performing well under unseen loading schemes as well as under unseen losses of lines and generators (up to a 50% reduction in iterations). This work thus provides a proof-of-concept for using RL for parameter selection in ADMM for power systems applications.
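For context on the kind of penalty adaptation the RL policy replaces, the sketch below shows ADMM with the classical residual-balancing update (He et al.'s self-adaptive rule), applied to a generic lasso problem rather than ACOPF. This is an illustrative baseline heuristic only, not the paper's method or problem; all names and parameter values here are assumptions.

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding operator (proximal map of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, mu=10.0, tau=2.0, tol=1e-6, max_iter=500):
    """Solve min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z via ADMM.

    The penalty rho is adapted each iteration by residual balancing:
    grow rho when the primal residual dominates, shrink it when the
    dual residual dominates. The paper trains an RL policy to make
    this choice instead of using a fixed heuristic like this one.
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    for k in range(max_iter):
        # x-update: ridge-regularized least squares
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z_old = z.copy()
        z = soft_threshold(x + u, lam / rho)   # z-update: prox of l1 term
        u = u + x - z                          # scaled dual ascent
        r = np.linalg.norm(x - z)              # primal residual
        s = rho * np.linalg.norm(z - z_old)    # dual residual
        if r < tol and s < tol:
            return z, k + 1
        # residual-balancing penalty update; rescale u since it is scaled by 1/rho
        if r > mu * s:
            rho *= tau; u /= tau
        elif s > mu * r:
            rho /= tau; u *= tau
    return z, max_iter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    z, iters = admm_lasso(A, b, lam=0.1)
    print(f"converged in {iters} iterations, solution {z}")
```

The rescaling of `u` whenever `rho` changes keeps the scaled dual variable consistent with the new penalty; the RL approach in the paper instead learns when and how to adjust the penalty so as to minimize total iterations.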
Research Organization:
Argonne National Laboratory (ANL)
Sponsoring Organization:
Argonne National Laboratory - Laboratory Directed Research and Development (LDRD)
DOE Contract Number:
AC02-06CH11357
OSTI ID:
2324947
Country of Publication:
United States
Language:
English


Similar Records

Distributed Reinforcement Learning with ADMM-RL
Conference · August 29, 2019 · OSTI ID: 1669404

Robust and Simple ADMM Penalty Parameter Selection
Journal Article · January 9, 2024 · IEEE Open Journal of Signal Processing (Online) · OSTI ID: 2283368