A Deep Reinforcement Learning Strategy for Surrounding Vehicles-Based Lane-Keeping Control

Bibliographic Details
Main Authors: Jihun Kim, Sanghoon Park, Jeesu Kim, Jinwoo Yoo
Format: Article
Language: English
Published: MDPI AG 2023-12-01
Series: Sensors
Subjects:
Online Access: https://www.mdpi.com/1424-8220/23/24/9843
Description
Summary: As autonomous vehicles (AVs) advance to higher levels of autonomy and performance, the associated technologies are becoming increasingly diverse. Lane-keeping systems (LKS), a key functionality of AVs, considerably enhance driver convenience. As drivers increasingly rely on autonomous driving technologies, safety features such as fail-safe mechanisms in the event of sensor failures have gained prominence. This paper therefore proposes a reinforcement learning (RL) control method for lane keeping that uses surrounding-object information obtained from LiDAR sensors rather than camera sensors. The approach feeds information about surrounding vehicles and objects to the RL framework as observations so that the vehicle maintains its current lane. The learning environment is built by integrating IPG CarMaker, which provides the vehicle dynamics, with MATLAB Simulink for data analysis and RL model creation. To further validate the applicability of the LiDAR sensor data in real-world settings, Gaussian noise is introduced into the virtual simulation environment to mimic the sensor noise encountered in actual operating conditions.
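The noise-injection step described in the abstract can be illustrated with a short, hedged sketch. The Python example below is not the authors' implementation (the paper builds its environment in IPG CarMaker and MATLAB Simulink); it only shows the general idea of perturbing LiDAR-derived observations with zero-mean Gaussian noise before they reach the RL agent, assuming a Gymnasium-style environment with a bounded (Box) observation vector. The class name, the noise_std parameter, and the LaneKeepingEnv placeholder are illustrative assumptions, not names from the paper.

import numpy as np
import gymnasium as gym

class GaussianObservationNoise(gym.ObservationWrapper):
    """Adds zero-mean Gaussian noise to each observation to mimic LiDAR measurement noise."""

    def __init__(self, env, noise_std=0.05, seed=None):
        super().__init__(env)
        self.noise_std = noise_std                # standard deviation of the simulated sensor noise
        self.rng = np.random.default_rng(seed)    # dedicated RNG so the injected noise is reproducible

    def observation(self, obs):
        # Perturb the LiDAR-derived features (e.g., relative positions of surrounding vehicles).
        noisy = obs + self.rng.normal(0.0, self.noise_std, size=np.shape(obs))
        # Keep the perturbed observation inside the declared (bounded) observation space.
        return np.clip(noisy, self.observation_space.low, self.observation_space.high)

# Usage (LaneKeepingEnv is a hypothetical stand-in for the CarMaker/Simulink environment):
# env = GaussianObservationNoise(LaneKeepingEnv(), noise_std=0.05)

Wrapping the environment rather than the sensor model keeps the noise level as a single tunable parameter, which mirrors how the abstract describes injecting noise into the virtual simulation environment to emulate real operating conditions.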
ISSN:1424-8220