Enhancing Energy Management Strategies for Extended-Range Electric Vehicles through Deep Q-Learning and Continuous State Representation
The efficiency and dynamics of hybrid electric vehicles are inherently linked to effective energy management strategies. However, complexity is heightened due to uncertainty and variations in real driving conditions. This article introduces an innovative strategy for extended-range electric vehicles...
Main Authors: | Christian Montaleza, Paul Arévalo, Jimmy Gallegos, Francisco Jurado |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2024-01-01 |
Series: | Energies |
Subjects: | extended-range electric vehicles; deep reinforcement learning; energy management system; fuel consumption reduction |
Online Access: | https://www.mdpi.com/1996-1073/17/2/514 |
author | Christian Montaleza Paul Arévalo Jimmy Gallegos Francisco Jurado |
collection | DOAJ |
description | The efficiency and dynamics of hybrid electric vehicles are inherently linked to effective energy management strategies. However, their complexity is heightened by uncertainty and variation in real driving conditions. This article introduces an innovative strategy for extended-range electric vehicles, grounded in the optimization of driving cycles, the prediction of driving conditions, and predictive control through neural networks. First, the challenges of the energy management system are addressed by merging deep reinforcement learning with strongly convex objective optimization, giving rise to a method called DQL-AMSGrad. The DQL algorithm is then implemented, using temporal-difference updates to adjust Q values so as to maximize the expected cumulative reward. The loss function is the mean squared error between the current estimate and the calculated target. The AMSGrad optimization method is applied to efficiently adjust the weights of the artificial neural network, and hyperparameters such as the learning rate and discount factor are tuned using data collected during real-world driving tests. This strategy tackles the “curse of dimensionality” and demonstrates a 30% improvement in adaptability to changing environmental conditions. Compared with conventional approaches, it converges 20% faster and updates the neural network weights 15% more effectively; it also achieves an 18% reduction in fuel consumption in a case study with the Nissan Xtrail e-POWER system, validating its practical applicability. |
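The description above outlines the core DQL-AMSGrad loop: a temporal-difference target, a mean-squared-error loss between the current Q estimate and that target, and AMSGrad updates to the network weights. As a rough illustration only — with a linear Q-function standing in for the article's neural network, and made-up state features, action set, and hyperparameter values (none of these come from the paper) — one such update step might look like:

```python
import math
import random

# Illustrative hyperparameters; the article tunes learning rate and discount
# factor from real driving data, and these values are only placeholders.
GAMMA = 0.99                      # discount factor
LR = 1e-3                         # learning rate
BETA1, BETA2, EPS = 0.9, 0.999, 1e-8

# Hypothetical state features (e.g. SOC, speed, accel, power demand) and a
# small discrete action set (e.g. candidate power-split commands).
N_FEATURES, N_ACTIONS = 4, 3

# One weight vector per action; AMSGrad tracks m, v, and the running max of v.
w = [[0.0] * N_FEATURES for _ in range(N_ACTIONS)]
m = [[0.0] * N_FEATURES for _ in range(N_ACTIONS)]
v = [[0.0] * N_FEATURES for _ in range(N_ACTIONS)]
v_hat = [[0.0] * N_FEATURES for _ in range(N_ACTIONS)]

def q_value(state, action):
    """Linear Q-function: a stand-in for the paper's neural network."""
    return sum(wi * si for wi, si in zip(w[action], state))

def td_update(state, action, reward, next_state):
    """One DQL step: TD target, squared-error loss, AMSGrad weight update."""
    target = reward + GAMMA * max(q_value(next_state, a) for a in range(N_ACTIONS))
    error = q_value(state, action) - target      # current estimate minus target
    loss = error ** 2                            # per-sample squared-error loss
    for i, s_i in enumerate(state):
        g = 2 * error * s_i                      # d(loss)/d(w[action][i])
        m[action][i] = BETA1 * m[action][i] + (1 - BETA1) * g
        v[action][i] = BETA2 * v[action][i] + (1 - BETA2) * g * g
        v_hat[action][i] = max(v_hat[action][i], v[action][i])  # AMSGrad max step
        w[action][i] -= LR * m[action][i] / (math.sqrt(v_hat[action][i]) + EPS)
    return loss

# Toy usage: a few random transitions stand in for logged driving data.
random.seed(0)
for _ in range(5):
    s = [random.random() for _ in range(N_FEATURES)]
    s2 = [random.random() for _ in range(N_FEATURES)]
    td_update(s, random.randrange(N_ACTIONS), random.random(), s2)
```

The `max` over `v_hat` is what distinguishes AMSGrad from plain Adam: it keeps the effective step size non-increasing per coordinate, which is the property the paper exploits for stable convergence.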
first_indexed | 2024-03-08T10:58:41Z |
format | Article |
id | doaj.art-e4bad1f5af354866b1d7ddc83f8f1bed |
institution | Directory Open Access Journal |
issn | 1996-1073 |
language | English |
last_indexed | 2024-03-08T10:58:41Z |
publishDate | 2024-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Energies |
doi | 10.3390/en17020514 |
citation | Energies, vol. 17, no. 2, art. no. 514, 2024-01-01 |
affiliation | Christian Montaleza, Paul Arévalo, Jimmy Gallegos, Francisco Jurado: Department of Electrical Engineering, Superior Polytechnic School of Linares, University of Jaén, 23700 Linares, Jaén, Spain |
title | Enhancing Energy Management Strategies for Extended-Range Electric Vehicles through Deep Q-Learning and Continuous State Representation |
topic | extended-range electric vehicles deep reinforcement learning energy management system fuel consumption reduction |
url | https://www.mdpi.com/1996-1073/17/2/514 |