Learning Time Reduction Using Warm Start Methods for a Reinforcement Learning Based Supervisory Control in Hybrid Electric Vehicle Applications

27 Oct 2020  ·  Bin Xu, Jun Hou, Junzhe Shi, Huayi Li, Dhruvang Rathod, Zhe Wang, Zoran Filipi

Reinforcement Learning (RL) is widely used in robotics, and it is gradually being adopted for Hybrid Electric Vehicle (HEV) supervisory control. Although RL shows excellent fuel consumption minimization performance in simulation, the large number of learning iterations requires a long learning time, making it hard to apply in real-world vehicles. In addition, fuel consumption during the initial learning phases is much worse than that of baseline controls. This study aims to reduce the number of Q-learning iterations in HEV applications and to improve fuel consumption in the initial learning phases by using warm start methods. Unlike previous studies, which initialized Q-learning with zero or random Q values, this study initializes Q-learning with existing supervisory controls (i.e., Equivalent Consumption Minimization Strategy control and heuristic control), and a detailed analysis is given. The results show that the proposed warm start Q-learning requires 68.8% fewer iterations than cold start Q-learning. The trained Q-learning controller is validated on two different driving cycles, and the results show a 10-16% MPG improvement compared to Equivalent Consumption Minimization Strategy control. Furthermore, real-time feasibility is analyzed, and guidance for vehicle implementation is provided. The results of this study can be used to facilitate the deployment of RL in vehicle supervisory control applications.
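As a rough illustration of the warm start idea described above, the sketch below initializes a Q-table so that, in every state, the action chosen by an existing supervisory controller starts with a higher value than the alternatives, biasing early exploration toward the baseline behavior. This is a minimal sketch under stated assumptions, not the authors' implementation: the state discretization (SOC and power-demand bins), the `heuristic_policy` rule, and all sizes and names are illustrative placeholders.

```python
import numpy as np

# Hypothetical discretization: states = (SOC bin, power-demand bin),
# actions = discrete power-split levels. Sizes are illustrative only.
N_SOC, N_PDEM, N_ACT = 20, 15, 10

def warm_start_q_table(baseline_policy, init_value=1.0):
    """Initialize a Q-table from an existing supervisory controller
    (e.g., ECMS or a heuristic rule) instead of zeros or random values.
    The baseline controller's action gets a higher initial Q value in
    every state, so early greedy actions mimic the baseline control."""
    q = np.zeros((N_SOC, N_PDEM, N_ACT))
    for s in range(N_SOC):
        for p in range(N_PDEM):
            a = baseline_policy(s, p)  # action the baseline controller picks
            q[s, p, a] = init_value    # favor that action at the start
    return q

def heuristic_policy(soc_bin, pdem_bin):
    # Illustrative placeholder rule: scale the action level with power demand.
    return min(N_ACT - 1, pdem_bin * N_ACT // N_PDEM)

Q = warm_start_q_table(heuristic_policy)
# Q-learning then proceeds with the standard update, e.g.:
# Q[s, p, a] += alpha * (r + gamma * Q[s2, p2].max() - Q[s, p, a])
```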
