AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning
In a complex underwater environment, finding a viable, collision-free path for an autonomous underwater vehicle (AUV) is a challenging task. The purpose of this paper is to establish a safe, real-time, and robust method of collision avoidance that improves the autonomy of AUVs. We propose a method based on active sonar, which utilizes a deep reinforcement learning algorithm to learn the processed sonar information to navigate the AUV in an uncertain environment.
Main Authors: | Jianya Yuan, Hongjian Wang, Honghan Zhang, Changjian Lin, Dan Yu, Chengfeng Li |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-10-01 |
Series: | Journal of Marine Science and Engineering |
Subjects: | autonomous underwater vehicle (AUV); collision avoidance planning; deep reinforcement learning (DRL); double-DQN (D-DQN) |
Online Access: | https://www.mdpi.com/2077-1312/9/11/1166 |
_version_ | 1797509799313670144 |
author | Jianya Yuan; Hongjian Wang; Honghan Zhang; Changjian Lin; Dan Yu; Chengfeng Li |
author_facet | Jianya Yuan; Hongjian Wang; Honghan Zhang; Changjian Lin; Dan Yu; Chengfeng Li |
author_sort | Jianya Yuan |
collection | DOAJ |
description | In a complex underwater environment, finding a viable, collision-free path for an autonomous underwater vehicle (AUV) is a challenging task. The purpose of this paper is to establish a safe, real-time, and robust collision avoidance method that improves the autonomy of AUVs. We propose a method based on active sonar, which uses a deep reinforcement learning algorithm to learn from processed sonar information and navigate the AUV in an uncertain environment. We compare the performance of the double deep Q-network algorithm with that of a genetic algorithm and deep learning. We also propose a line-of-sight guidance method that mitigates abrupt changes in yaw and smooths heading changes when the AUV switches trajectories. Experimental results show that the double deep Q-network algorithm delivers excellent collision avoidance performance. The effectiveness of the proposed algorithm was verified in three environments: random static, mixed static, and complex dynamic. The results show that the proposed algorithm has significant advantages over the other algorithms in terms of success rate, collision avoidance performance, and generalization ability. The double deep Q-network algorithm is superior to the genetic algorithm and deep learning in running time, total path length, avoidance of moving obstacles, and planning time per step. After training in a simulated environment, the algorithm can continue to learn online from environmental information after deployment, adjusting the network weights in real time. These results demonstrate that the proposed approach has significant potential for practical applications. |
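The abstract names two components: a double deep Q-network (D-DQN) for action selection and line-of-sight (LOS) guidance for heading smoothing. The sketch below illustrates both ideas in minimal form; it assumes a discrete action space with NumPy-array Q-value outputs, and the function names, the 0.99 discount factor, and the 10-degree per-step yaw clamp are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ddqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double-DQN target: the online network selects the next action,
    the target network evaluates it (reduces Q-value overestimation)."""
    if done:
        return reward
    a_star = int(np.argmax(online_net(next_state)))       # action selection
    return reward + gamma * target_net(next_state)[a_star]  # action evaluation

def los_heading(pos, waypoint, current_heading, max_yaw_step=np.radians(10)):
    """Line-of-sight guidance: steer toward the active waypoint, but clamp
    the per-step yaw change to smooth heading switches between trajectories."""
    desired = np.arctan2(waypoint[1] - pos[1], waypoint[0] - pos[0])
    err = np.arctan2(np.sin(desired - current_heading),
                     np.cos(desired - current_heading))   # wrap to [-pi, pi]
    return current_heading + np.clip(err, -max_yaw_step, max_yaw_step)
```

Decoupling action selection (online network) from action evaluation (target network) is the mechanism by which D-DQN reduces the overestimation bias of vanilla DQN; the yaw clamp is one simple way to realize the smoothed heading changes the abstract describes.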
first_indexed | 2024-03-10T05:23:55Z |
format | Article |
id | doaj.art-aa4f8fea66634cbb8579ebfc57708835 |
institution | Directory Open Access Journal |
issn | 2077-1312 |
language | English |
last_indexed | 2024-03-10T05:23:55Z |
publishDate | 2021-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Journal of Marine Science and Engineering |
spelling | doaj.art-aa4f8fea66634cbb8579ebfc57708835 | 2023-11-22T23:52:45Z | eng | MDPI AG | Journal of Marine Science and Engineering | 2077-1312 | 2021-10-01 | 9(11):1166 | doi:10.3390/jmse9111166 | AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning |
Author affiliations: Jianya Yuan, Hongjian Wang, Honghan Zhang, Dan Yu, and Chengfeng Li (College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 045100, China); Changjian Lin (School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221000, China).
https://www.mdpi.com/2077-1312/9/11/1166 | Keywords: autonomous underwater vehicle (AUV); collision avoidance planning; deep reinforcement learning (DRL); double-DQN (D-DQN) |
spellingShingle | Jianya Yuan; Hongjian Wang; Honghan Zhang; Changjian Lin; Dan Yu; Chengfeng Li | AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning | Journal of Marine Science and Engineering | autonomous underwater vehicle (AUV); collision avoidance planning; deep reinforcement learning (DRL); double-DQN (D-DQN) |
title | AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning |
title_full | AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning |
title_fullStr | AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning |
title_full_unstemmed | AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning |
title_short | AUV Obstacle Avoidance Planning Based on Deep Reinforcement Learning |
title_sort | auv obstacle avoidance planning based on deep reinforcement learning |
topic | autonomous underwater vehicle (AUV); collision avoidance planning; deep reinforcement learning (DRL); double-DQN (D-DQN) |
url | https://www.mdpi.com/2077-1312/9/11/1166 |
work_keys_str_mv | AT jianyayuan auvobstacleavoidanceplanningbasedondeepreinforcementlearning AT hongjianwang auvobstacleavoidanceplanningbasedondeepreinforcementlearning AT honghanzhang auvobstacleavoidanceplanningbasedondeepreinforcementlearning AT changjianlin auvobstacleavoidanceplanningbasedondeepreinforcementlearning AT danyu auvobstacleavoidanceplanningbasedondeepreinforcementlearning AT chengfengli auvobstacleavoidanceplanningbasedondeepreinforcementlearning |