Ad Hoc-Obstacle Avoidance-Based Navigation System Using Deep Reinforcement Learning for Self-Driving Vehicles

Bibliographic Details
Main Authors: N. S. Manikandan, Ganesan Kaliyaperumal, Yong Wang
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10189852/
Description
Summary: In this research, a novel navigation algorithm for self-driving vehicles that avoids collisions with pedestrians and ad hoc obstacles is described. The proposed algorithm predicts the locations of ad hoc obstacles and wandering pedestrians using an RGB-D depth sensor. Unique ad hoc-obstacle-aware mobility rules are presented that account for these environmental uncertainties. A Deep Reinforcement Learning (DRL) algorithm is proposed as the decision-making technique that steers the self-driving vehicle to the target without incident. The deep Q-network (DQN), double deep Q-network (DDQN), and dueling double deep Q-network (D3DQN) algorithms were compared, and the D3DQN accumulated the fewest negative rewards. The algorithms were tested in the CARLA simulation environment with input from RGB-D and RGB-Lidar sensors, and the convolutional-neural-network-based D3DQN was consequently selected as the optimal DRL algorithm. In simulations of slow-moving urban traffic, RGB-D and RGB-Lidar produced essentially the same results. A child ride-on car was modified into a self-driving vehicle to demonstrate the real-time effectiveness of the proposed algorithm in practice.
ISSN: 2169-3536
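
Note: The summary names three value-based DRL variants (DQN, DDQN, D3DQN). For orientation only, the sketch below shows what a dueling double deep Q-network typically combines: a dueling Q-value head plus double-DQN targets. It is a minimal, assumed PyTorch illustration, not the authors' implementation; class names, layer sizes, state/action dimensions, and the discount factor are placeholders, and a single linear feature layer stands in for the convolutional feature extractor that RGB-D input would normally require.

    # Illustrative sketch only (not the paper's code): dueling Q-network with
    # a double-DQN target, i.e. the D3DQN combination named in the abstract.
    import torch
    import torch.nn as nn

    class DuelingQNet(nn.Module):
        def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
            super().__init__()
            # Placeholder feature extractor; a CNN would process RGB-D frames.
            self.features = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
            self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.features(state)
            v, a = self.value(h), self.advantage(h)
            # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
            return v + a - a.mean(dim=1, keepdim=True)

    def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                          reward: torch.Tensor, next_state: torch.Tensor,
                          done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
        # Double DQN: the online net selects the next action, the target net evaluates it.
        with torch.no_grad():
            best_action = online(next_state).argmax(dim=1, keepdim=True)
            next_q = target(next_state).gather(1, best_action).squeeze(1)
            return reward + gamma * (1.0 - done) * next_q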