Summary: Autonomous driving techniques are promising for improving the energy efficiency of electrified vehicles (EVs) by adjusting driving decisions and optimizing energy requirements. Conventional energy-efficient autonomous driving methods rely on longitudinal velocity planning in fixed-route scenarios, which is insufficient to achieve optimal performance. In this article, a novel decision-making strategy is proposed for autonomous EVs (AEVs) to maximize energy efficiency by simultaneously considering lane-change and car-following behaviors. Leveraging a deep reinforcement learning (RL) algorithm, the proposed strategy processes complex state information comprising visual spatial–temporal topology and physical variables to better comprehend the surrounding environment. A rule-based safety checker is developed and integrated downstream of the RL decision-making module to improve lane-change safety. The proposed strategy is trained and evaluated in dynamic driving scenarios with interactive surrounding traffic participants. Simulation results demonstrate that the proposed strategy remarkably improves the EV's energy economy over state-of-the-art techniques without compromising driving safety or traffic efficiency. Moreover, the results suggest that incorporating visual state variables into the RL decision-making strategy further improves energy savings in complicated traffic situations.
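The downstream safety-checker pattern described above can be sketched as follows. This is an illustrative example, not the paper's implementation: the RL policy proposes a high-level action (keep lane, change left, or change right), and a rule-based checker vetoes unsafe lane changes using simple gap and time-to-collision thresholds. All names and threshold values here are assumptions chosen for illustration.

```python
from dataclasses import dataclass

# High-level actions the RL policy may propose (encoding is an assumption).
KEEP_LANE, CHANGE_LEFT, CHANGE_RIGHT = 0, 1, 2


@dataclass
class GapState:
    """Relative state of the nearest vehicles in a target lane."""
    lead_gap: float      # m, distance to the leading vehicle in the target lane
    rear_gap: float      # m, distance to the following vehicle in the target lane
    rear_closing: float  # m/s, closing speed of the follower (>0 means approaching)


def safe_to_change(gap: GapState,
                   min_gap: float = 10.0,
                   min_ttc: float = 3.0) -> bool:
    """Rule-based check: both gaps must be open and the follower's
    time-to-collision (TTC) must exceed a threshold."""
    if gap.lead_gap < min_gap or gap.rear_gap < min_gap:
        return False
    if gap.rear_closing > 0 and gap.rear_gap / gap.rear_closing < min_ttc:
        return False
    return True


def safety_checker(rl_action: int, left: GapState, right: GapState) -> int:
    """Pass the RL action through unless it is an unsafe lane change,
    in which case fall back to keeping the current lane."""
    if rl_action == CHANGE_LEFT and not safe_to_change(left):
        return KEEP_LANE
    if rl_action == CHANGE_RIGHT and not safe_to_change(right):
        return KEEP_LANE
    return rl_action
```

The key design point is that the checker sits after the learned policy: the RL module is free to optimize energy economy, while hard safety rules act as a final filter on its lane-change decisions.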