OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Gnu-RL: A Precocial Reinforcement Learning Solution for Building HVAC Control Using a Differentiable MPC Policy

Conference

Reinforcement learning (RL) was first demonstrated to be a feasible approach to controlling heating, ventilation, and air conditioning (HVAC) systems more than a decade ago. However, there has been limited progress towards a practical and scalable RL solution for HVAC control. While one can train an RL agent in simulation, it is not cost-effective to create a model for each thermal zone or building. Likewise, existing RL agents generally take a long time to learn and are opaque to expert interrogation, making them unattractive for real-world deployment. To tackle these challenges, we propose Gnu-RL: a novel approach that enables practical deployment of RL for HVAC control and requires no prior information other than historical data from existing HVAC controllers. To achieve this, Gnu-RL adopts a recently developed Differentiable Model Predictive Control (MPC) policy, which encodes domain knowledge on planning and system dynamics, making it both sample-efficient and interpretable. Prior to any interaction with the environment, a Gnu-RL agent is pre-trained on historical data using imitation learning, which enables it to match the behavior of the existing controller. Once it is put in charge of controlling the environment, the agent continues to improve its policy end-to-end using a policy gradient algorithm. We evaluate Gnu-RL on both an EnergyPlus model and a real-world testbed. In both experiments, our agents were deployed directly in the environment after offline pre-training on expert demonstrations. In the simulation experiment, our approach saved 6.6% of energy compared to the best published RL result for the same environment, while maintaining a higher level of occupant comfort. Next, Gnu-RL was deployed to control the HVAC of a real-world conference room for a three-week period. Our results show that Gnu-RL saved 16.7% of cooling demand compared to the existing controller and tracked the temperature set-point more closely.
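For illustration, the sketch below outlines the two-phase workflow the abstract describes: offline imitation learning on historical controller data, followed by online policy-gradient fine-tuning. This is a minimal PyTorch sketch under stated assumptions: the LinearMPCPolicy module, the env_step function, the Gaussian policy parameterization, and all dimensions and hyperparameters are hypothetical placeholders, not the authors' implementation, which uses a differentiable MPC layer as the policy.

# Minimal sketch of the two-phase Gnu-RL training workflow: (1) offline
# imitation learning on historical controller data, then (2) online
# policy-gradient fine-tuning. LinearMPCPolicy is a hypothetical stand-in
# for the paper's differentiable MPC policy.
import torch
import torch.nn as nn

class LinearMPCPolicy(nn.Module):
    """Toy differentiable 'planner': a learnable linear map from state to
    action, standing in for a differentiable MPC layer whose dynamics and
    cost parameters would be learned end-to-end."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.plan = nn.Linear(state_dim, action_dim)      # placeholder for the MPC solve
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        mean = self.plan(state)
        return torch.distributions.Normal(mean, self.log_std.exp())

state_dim, action_dim = 4, 1
policy = LinearMPCPolicy(state_dim, action_dim)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Phase 1: imitation learning on historical (state, action) pairs.
# Synthetic stand-in for data logged from the existing HVAC controller.
hist_states = torch.randn(256, state_dim)
hist_actions = torch.randn(256, action_dim)
for _ in range(200):
    dist = policy(hist_states)
    loss = -dist.log_prob(hist_actions).mean()            # behavior cloning (max likelihood)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: online policy-gradient (REINFORCE-style) fine-tuning.
def env_step(state, action):
    """Hypothetical environment: returns the next state and a reward trading
    off control effort (energy) against set-point tracking error."""
    next_state = state + 0.1 * torch.randn_like(state)
    reward = -(action ** 2).sum() - next_state[0] ** 2
    return next_state, reward

for episode in range(50):
    state, log_probs, rewards = torch.randn(state_dim), [], []
    for t in range(24):                                    # e.g. one day at hourly steps
        dist = policy(state)
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        state, reward = env_step(state, action)
        rewards.append(reward)
    returns = torch.cumsum(torch.stack(rewards).flip(0), 0).flip(0)   # reward-to-go
    loss = -(torch.stack(log_probs) * returns.detach()).mean()
    opt.zero_grad(); loss.backward(); opt.step()

In the actual system, the pre-trained policy would be deployed directly on the building after Phase 1 and continue to improve online, as described above.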

Research Organization:
Carnegie Mellon Univ., Pittsburgh, PA (United States)
Sponsoring Organization:
USDOE Office of Energy Efficiency and Renewable Energy (EERE), Energy Efficiency Office, Building Technologies Office
DOE Contract Number:
EE0007682
OSTI ID:
1576205
Report Number(s):
DOE-CMU-EE0007682
Resource Relation:
Conference: 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys '19), New York, NY, USA, November 2019; Related Information: Chen, Bingqing; Cai, Zicheng; Bergés, Mario. "Gnu-RL: A Precocial Reinforcement Learning Solution for Building HVAC Control Using a Differentiable MPC Policy." In Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys '19), pp. 316-325. ACM, New York, NY, USA, 2019. ISBN 978-1-4503-7005-9. DOI: 10.1145/3360322.3360849. http://doi.acm.org/10.1145/3360322.3360849
Country of Publication:
United States
Language:
English
