Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning
With the wide application of intelligent unmanned vehicles, intelligent navigation, path planning, and obstacle avoidance technologies have become important research topics. This paper proposes a method based on the model-free deep reinforcement learning algorithms DDPG and SAC, which uses environmental information to track toward the target point, avoids static and dynamic obstacles, and generalizes to different environments. By combining global planning with local obstacle avoidance, it solves the path planning problem with better globality and robustness, solves the obstacle avoidance problem with better dynamic adaptability and generalization, and shortens iteration time. In the network training stage, traditional algorithms such as PID and A* are incorporated to improve the method's convergence speed and stability. Finally, a variety of experimental scenarios covering navigation and obstacle avoidance are designed in the Robot Operating System (ROS) and the Gazebo simulator. Simulation results verify the reliability of the proposed approach, which accounts for the global and dynamic nature of the problem and optimizes the generated paths and time efficiency.
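As a rough illustration of the bi-level idea described in the abstract (and not the paper's implementation), the sketch below pairs an A* global planner on an occupancy grid with a simple local layer that detours around dynamic obstacles. In the paper, the local layer is a learned DDPG/SAC policy; here a replanning step stands in for it. All function and variable names are hypothetical.

```python
import heapq

def astar(grid, start, goal):
    """Global planner: A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic, admissible for unit step costs
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue  # already expanded with a cheaper cost
        came_from[cur] = parent
        if cur == goal:  # reconstruct path by walking parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None  # goal unreachable

def follow(grid, path, dynamic_obstacles):
    """Local layer: step along the global path; when a dynamic obstacle blocks the
    next waypoint, replan around it (a stand-in for the learned DRL policy)."""
    pos, route = path[0], [path[0]]
    for wp in path[1:]:
        if wp in dynamic_obstacles:
            blocked = [row[:] for row in grid]
            for r, c in dynamic_obstacles:
                blocked[r][c] = 1
            detour = astar(blocked, pos, path[-1])
            return route + detour[1:]
        pos = wp
        route.append(wp)
    return route
```

The split mirrors the abstract's design choice: the global layer guarantees a path that reaches the goal, while the cheap local layer absorbs dynamic changes without recomputing everything from scratch.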
Main Authors: | HUANG Yuzhou, WANG Lisong, QIN Xiaolin |
---|---|
Format: | Article |
Language: | zho |
Published: | Editorial office of Computer Science, 2023-01-01 |
Series: | Jisuanji kexue |
Subjects: | unmanned vehicle; obstacle avoidance; path planning; deep reinforcement learning |
Online Access: | https://www.jsjkx.com/fileup/1002-137X/PDF/1002-137X-2023-50-1-194.pdf |
_version_ | 1797845120668663808 |
---|---|
author | HUANG Yuzhou, WANG Lisong, QIN Xiaolin |
author_facet | HUANG Yuzhou, WANG Lisong, QIN Xiaolin |
author_sort | HUANG Yuzhou, WANG Lisong, QIN Xiaolin |
collection | DOAJ |
description | With the wide application of intelligent unmanned vehicles, intelligent navigation, path planning, and obstacle avoidance technologies have become important research topics. This paper proposes a method based on the model-free deep reinforcement learning algorithms DDPG and SAC, which uses environmental information to track toward the target point, avoids static and dynamic obstacles, and generalizes to different environments. By combining global planning with local obstacle avoidance, it solves the path planning problem with better globality and robustness, solves the obstacle avoidance problem with better dynamic adaptability and generalization, and shortens iteration time. In the network training stage, traditional algorithms such as PID and A<sup>*</sup> are incorporated to improve the method's convergence speed and stability. Finally, a variety of experimental scenarios covering navigation and obstacle avoidance are designed in the Robot Operating System (ROS) and the Gazebo simulator. Simulation results verify the reliability of the proposed approach, which accounts for the global and dynamic nature of the problem and optimizes the generated paths and time efficiency. |
first_indexed | 2024-04-09T17:33:28Z |
format | Article |
id | doaj.art-45f4c6077f754ed39e25c601819e8da8 |
institution | Directory Open Access Journal |
issn | 1002-137X |
language | zho |
last_indexed | 2024-04-09T17:33:28Z |
publishDate | 2023-01-01 |
publisher | Editorial office of Computer Science |
record_format | Article |
series | Jisuanji kexue |
spelling | doaj.art-45f4c6077f754ed39e25c601819e8da8 | Jisuanji kexue, ISSN 1002-137X, Editorial office of Computer Science, 2023-01-01, Vol. 50, No. 1, pp. 194-204 | DOI: 10.11896/jsjkx.220500241 | Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning | HUANG Yuzhou, WANG Lisong, QIN Xiaolin (College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China) | https://www.jsjkx.com/fileup/1002-137X/PDF/1002-137X-2023-50-1-194.pdf | unmanned vehicle|obstacle avoidance|path planning|deep reinforcement learning |
spellingShingle | HUANG Yuzhou, WANG Lisong, QIN Xiaolin Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning Jisuanji kexue unmanned vehicle|obstacle avoidance|path planning|deep reinforcement learning |
title | Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning |
title_full | Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning |
title_fullStr | Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning |
title_full_unstemmed | Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning |
title_short | Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning |
title_sort | bi level path planning method for unmanned vehicle based on deep reinforcement learning |
topic | unmanned vehicle|obstacle avoidance|path planning|deep reinforcement learning |
url | https://www.jsjkx.com/fileup/1002-137X/PDF/1002-137X-2023-50-1-194.pdf |
work_keys_str_mv | AT huangyuzhouwanglisongqinxiaolin bilevelpathplanningmethodforunmannedvehiclebasedondeepreinforcementlearning |