OSTI.GOV · U.S. Department of Energy, Office of Scientific and Technical Information

Title: A Cooperative Multi-Agent Deep Reinforcement Learning Framework for Real-Time Residential Load Scheduling

Conference

Internet-of-Things (IoT) enabled monitoring and control capabilities are allowing increasing numbers of household users with controllable loads to actively participate in smart grid energy management. Realizing an efficient real-time energy management system that takes advantage of these developments requires novel techniques for managing the enlarged control action space while resolving challenges such as uncertainty in energy prices and renewable energy output, together with the need to satisfy physical grid constraints such as transformer capacity. Addressing these challenges, we develop a multi-household energy management framework for residential units connected to the same transformer and equipped with distributed energy resources (DERs) such as photovoltaics (PV), energy storage systems (ESS), and controllable loads. The goal of our framework is to schedule controllable household appliances and ESS so that the cost of procuring electricity from the utility over a horizon is minimized while physical grid constraints are satisfied at every scheduling step. Traditional energy management frameworks either perform global optimization that satisfies grid constraints but suffers from high computational complexity (for example, integer programming, mixed-integer programming, and centralized reinforcement learning) or perform decentralized real-time energy management without satisfying global grid constraints (for example, multi-agent reinforcement learning with no cooperation). In contrast, we propose a cooperative multi-agent reinforcement learning (MARL) framework that (i) operates in real time and (ii) performs explicit collaboration to satisfy global grid constraints. The novelty of our framework is twofold. First, it trains multiple independent learners (ILs), one per household, in parallel on historical data and performs real-time inference of control actions from the most recent system state. Second, it contains a low-complexity knapsack-based cooperation agent that combines the outputs of the ILs to minimize cost while satisfying grid constraints. Simulation results show that our cooperative MARL approach achieves significant cost improvement over centralized reinforcement learning and day-ahead planning baselines. Moreover, our approach strictly satisfies physical constraints with no a priori knowledge of system dynamics, whereas the baseline approaches show occasional violations. We also measure training and inference time as the number of households varies from 1 to 25; the results show that our cooperative MARL approach scales best among the compared approaches.
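To make the knapsack-based cooperation idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm. It assumes (hypothetically) that each household's independent learner submits a proposed action with an estimated cost saving and an extra power draw, and that the cooperation agent approves a subset of proposals maximizing total saving without exceeding the shared transformer's remaining capacity. All names (`knapsack_coordinate`, `proposals`, `capacity_kw`) are invented for this sketch.

```python
def knapsack_coordinate(proposals, capacity_kw, step_kw=0.5):
    """Select household proposals via a 0/1 knapsack dynamic program.

    proposals:   list of (household_id, power_kw, value) tuples, where
                 `value` is the household IL's estimated cost saving if
                 its proposed action is approved and `power_kw` is the
                 extra power the action draws through the transformer.
    capacity_kw: remaining transformer headroom.
    step_kw:     discretization step for power (knapsack weights).

    Returns the set of household ids whose proposals are approved.
    """
    n = len(proposals)
    cap = int(capacity_kw / step_kw)                 # headroom in steps
    weights = [round(p / step_kw) for _, p, _ in proposals]
    values = [v for _, _, v in proposals]

    # Standard 0/1 knapsack DP: best[c] = max saving using headroom c.
    best = [0.0] * (cap + 1)
    keep = [[False] * (cap + 1) for _ in range(n)]
    for i in range(n):
        for c in range(cap, weights[i] - 1, -1):     # iterate downward
            cand = best[c - weights[i]] + values[i]
            if cand > best[c]:
                best[c] = cand
                keep[i][c] = True

    # Backtrack to recover which households were approved.
    approved, c = set(), cap
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            approved.add(proposals[i][0])
            c -= weights[i]
    return approved
```

The DP runs in O(n · capacity/step) time, which is the kind of low-complexity coordination step the abstract describes; the actual framework's agent may differ in its objective and constraint handling.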

Research Organization:
Univ. of Southern California, Los Angeles, CA (United States)
Sponsoring Organization:
USDOE Office of Energy Efficiency and Renewable Energy (EERE), Renewable Power Office, Solar Energy Technologies Office
DOE Contract Number:
EE0008003
OSTI ID:
1607511
Report Number(s):
EE0008003-3
Resource Relation:
Conference: The 4th ACM/IEEE International Conference on Internet of Things Design and Implementation (IoTDI), 2019
Country of Publication:
United States
Language:
English
