OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Minimizing Energy Consumption from Connected Signalized Intersections by Reinforcement Learning

Abstract

Explicit energy minimization objectives are often discouraged in signal optimization algorithms because of their negative impact on mobility performance. One potential direction to address this problem is a balanced objective function that achieves the desired mobility with minimized energy consumption. This research developed a reinforcement learning (RL) based control with reward functions that consider energy and mobility jointly: a penalty function is introduced for the number of stops. Further, we proposed a clustering-based technique to make the state space finite, which is critical for a tractable implementation of the RL algorithm. We implemented the algorithm in a calibrated NG-SIM network within a traffic micro-simulator, PTV VISSIM. With a sole focus on energy, we report a 47% reduction in energy consumption compared with existing signal control schemes, at the cost of a 65.6% increase in system travel time. In contrast, the control strategy focusing on energy minimization with a penalty for stops yields a 6.7% reduction in energy consumption with a 27% increase in system travel time. The developed RL algorithm with a flexible penalty function in the reward will achieve desired energy goals for a network of signalized intersections without compromising mobility performance.
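The record contains no implementation details beyond the abstract. As an illustration only, the sketch below shows one way the two ideas described above could fit together: a k-means clustering step that maps continuous traffic observations (e.g., per-approach queue lengths and speeds from connected vehicles) onto a finite state space, and a reward that trades energy consumption against a penalty on the number of stops. The tabular Q-learning update, the function names, and the `stop_penalty` weight are assumptions for this sketch, not details taken from the paper.

```python
# Illustrative sketch only: finite-state Q-learning for signal control with an
# energy-plus-stop-penalty reward. Names and parameters are assumptions; the
# paper's actual formulation may differ.
import numpy as np
from sklearn.cluster import KMeans


def make_state_encoder(observations, n_states=20, seed=0):
    """Cluster continuous traffic observations so the agent sees a finite state space."""
    km = KMeans(n_clusters=n_states, random_state=seed, n_init=10).fit(observations)
    return lambda obs: int(km.predict(np.asarray(obs).reshape(1, -1))[0])


def reward(energy_kj, num_stops, stop_penalty=5.0):
    """Joint objective: minimize energy while discouraging extra stops.
    stop_penalty = 0 recovers an energy-only controller; larger values
    push the policy back toward mobility."""
    return -(energy_kj + stop_penalty * num_stops)


def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = rng.random((1000, 4))          # synthetic stand-in for logged observations
    encode = make_state_encoder(history)

    n_actions = 4                            # e.g., candidate signal phase choices
    Q = np.zeros((20, n_actions))

    s = encode(rng.random(4))
    a = int(rng.integers(n_actions))         # epsilon-greedy exploration omitted
    r = reward(energy_kj=rng.random() * 100, num_stops=int(rng.integers(5)))
    s_next = encode(rng.random(4))
    q_learning_step(Q, s, a, r, s_next)
```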

Authors:
 Bin Al Islam, S. M. A. [1]; Aziz, H. M. Abdul [2]; Wang, Hong [3]; Young, Stanley E. [4] (ORCiD)
  1. Washington State University
  2. Oak Ridge National Laboratory
  3. Pacific Northwest National Laboratory
  4. National Renewable Energy Laboratory (NREL), Golden, CO (United States)
Publication Date: December 2018
Research Org.:
National Renewable Energy Lab. (NREL), Golden, CO (United States)
Sponsoring Org.:
USDOE Office of Energy Efficiency and Renewable Energy (EERE), Vehicle Technologies Office (EE-3V)
OSTI Identifier:
1494718
Report Number(s):
NREL/CP-5400-73257
DOE Contract Number:  
AC36-08GO28308
Resource Type:
Conference
Resource Relation:
Conference: Presented at the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 4-7 November 2018, Maui, Hawaii
Country of Publication:
United States
Language:
English
Subject:
32 ENERGY CONSERVATION, CONSUMPTION, AND UTILIZATION; reinforcement learning; fuel consumption; energy minimization; connected vehicles; traffic state observability

Citation Formats

Bin Al Islam, S. M. A., Aziz, H. M. Abdul, Wang, Hong, and Young, Stanley E. Minimizing Energy Consumption from Connected Signalized Intersections by Reinforcement Learning. United States: N. p., 2018. Web. doi:10.1109/ITSC.2018.8569891.
Bin Al Islam, S. M. A., Aziz, H. M. Abdul, Wang, Hong, & Young, Stanley E. Minimizing Energy Consumption from Connected Signalized Intersections by Reinforcement Learning. United States. doi:10.1109/ITSC.2018.8569891.
Bin Al Islam, S. M. A., Aziz, H. M. Abdul, Wang, Hong, and Young, Stanley E. 2018. "Minimizing Energy Consumption from Connected Signalized Intersections by Reinforcement Learning". United States. doi:10.1109/ITSC.2018.8569891.
@article{osti_1494718,
title = {Minimizing Energy Consumption from Connected Signalized Intersections by Reinforcement Learning},
author = {Bin Al Islam, S. M. A. and Aziz, H. M. Abdul and Wang, Hong and Young, Stanley E},
abstractNote = {Explicit energy minimization objectives are often discouraged in signal optimization algorithms because of their negative impact on mobility performance. One potential direction to address this problem is a balanced objective function that achieves the desired mobility with minimized energy consumption. This research developed a reinforcement learning (RL) based control with reward functions that consider energy and mobility jointly: a penalty function is introduced for the number of stops. Further, we proposed a clustering-based technique to make the state space finite, which is critical for a tractable implementation of the RL algorithm. We implemented the algorithm in a calibrated NG-SIM network within a traffic micro-simulator, PTV VISSIM. With a sole focus on energy, we report a 47% reduction in energy consumption compared with existing signal control schemes, at the cost of a 65.6% increase in system travel time. In contrast, the control strategy focusing on energy minimization with a penalty for stops yields a 6.7% reduction in energy consumption with a 27% increase in system travel time. The developed RL algorithm with a flexible penalty function in the reward will achieve desired energy goals for a network of signalized intersections without compromising mobility performance.},
doi = {10.1109/ITSC.2018.8569891},
place = {United States},
year = {2018},
month = {12}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
