Real-Time HEV Energy Management Strategy Considering Road Congestion Based on Deep Reinforcement Learning
This paper deals with the HEV real-time energy management problem using deep reinforcement learning with connected technologies such as Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I). In the HEV energy management problem, it is important to run the engine efficiently in order to minimize the total energy cost. This research proposes a policy model that takes road congestion into account and aims to learn the optimal system mode selection and power distribution over a long horizon via policy-based reinforcement learning. In the simulation, a traffic environment is generated in a virtual space by IPG CarMaker, and an HEV model is prepared in MATLAB/Simulink to calculate the energy cost while driving in the road environment. The simulation validation shows the versatility of the proposed method on the test data and, in addition, shows that considering road congestion reduces the total cost and improves the learning speed. Furthermore, we compare the proposed method with model predictive control (MPC) under the same conditions and show that the proposed method obtains solutions closer to the global optimum.
Main Authors: | Shota Inuzuka, Bo Zhang, Tielong Shen |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-08-01 |
Series: | Energies |
Subjects: | HEV energy management; connected technology; deep reinforcement learning |
Online Access: | https://www.mdpi.com/1996-1073/14/17/5270 |
id | doaj-fc5eacaa6cf04fc59cc82be8403c1c3c |
---|---|
DOI | 10.3390/en14175270 |
ISSN | 1996-1073 |
Citation | Energies, vol. 14, no. 17, article 5270, 2021-08-01 |
Author affiliations | Shota Inuzuka, Bo Zhang, Tielong Shen: Faculty of Science and Technology, Sophia University, Tokyo 102-8554, Japan |
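The abstract describes learning HEV power distribution with policy-based (policy-gradient) reinforcement learning over states that include road congestion. The following is a minimal REINFORCE-style sketch of that idea only; the cost model, state features, congestion pattern, and discrete power splits are invented for illustration and are not the paper's CarMaker/Simulink environment or its actual policy model.

```python
# Illustrative REINFORCE sketch: a softmax policy picks the fraction of power
# demand supplied by the engine; fuel cost is assumed worse under congestion.
# All dynamics and costs here are toy assumptions, not the paper's models.
import math
import random

random.seed(0)

ACTIONS = [0.0, 0.5, 1.0]  # hypothetical engine share of the power demand

def softmax(logits):
    m = max(logits)
    e = [math.exp(x - m) for x in logits]
    s = sum(e)
    return [x / s for x in e]

def features(demand, soc, congested):
    # State features: power demand, battery state of charge, congestion flag, bias.
    return [demand, soc, 1.0 if congested else 0.0, 1.0]

def logits(theta, phi):
    # Linear policy: one weight row per discrete action.
    return [sum(w * x for w, x in zip(row, phi)) for row in theta]

def step_cost(split, demand, soc, congested):
    # Assumed cost: engine fuel (penalized in congestion) plus a low-SOC penalty.
    fuel = split * demand * (1.5 if congested else 1.0)
    soc_next = min(1.0, max(0.0, soc - (1.0 - split) * demand * 0.05))
    return fuel + 2.0 * max(0.0, 0.3 - soc_next), soc_next

def run_episode(theta, horizon=20, lr=0.01):
    soc, total, trace = 0.6, 0.0, []
    for t in range(horizon):
        congested = t % 5 == 0          # toy periodic congestion pattern
        demand = 0.5 + 0.5 * random.random()
        phi = features(demand, soc, congested)
        probs = softmax(logits(theta, phi))
        a = random.choices(range(len(ACTIONS)), probs)[0]
        cost, soc = step_cost(ACTIONS[a], demand, soc, congested)
        total += cost
        trace.append((phi, probs, a))
    # REINFORCE update: shift probability mass away from actions
    # taken in high-cost episodes (gradient of log-softmax times return).
    for phi, probs, a in trace:
        for i in range(len(ACTIONS)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            for j in range(len(phi)):
                theta[i][j] -= lr * total * grad * phi[j]
    return total

theta = [[0.0] * 4 for _ in ACTIONS]
costs = [run_episode(theta) for _ in range(200)]
```

Because congestion is a policy input, the learned weights can shift the engine/battery split when the congestion flag is set, which is the mechanism the abstract credits for the reduced total cost and faster learning.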