Distributed and Scalable Cooperative Formation of Unmanned Ground Vehicles Using Deep Reinforcement Learning

Bibliographic Details
Main Authors: Shichun Huang, Tao Wang, Yong Tang, Yiwen Hu, Gu Xin, Dianle Zhou
Format: Article
Language: English
Published: MDPI AG 2023-01-01
Series: Aerospace
Subjects:
Online Access: https://www.mdpi.com/2226-4310/10/2/96
Description
Summary: Cooperative formation control of unmanned ground vehicles (UGVs) has become an important research focus in UGV applications and has attracted increasing attention in both military and civilian fields. Compared with traditional formation control algorithms, reinforcement-learning-based algorithms can provide a new, lower-complexity solution for real-time formation control by equipping UGVs with artificial intelligence. Therefore, in this paper, a distributed deep-reinforcement-learning-based cooperative formation control algorithm is proposed to solve the navigation, formation maintenance, and obstacle avoidance tasks of UGV formations. More importantly, the hierarchical triangular formation structure and the newly designed Markov decision process, which assigns leader and follower attributes to the UGVs, make the learned control strategy reusable, so that the formation can be expanded to an arbitrary number of UGVs with greater flexibility. The effectiveness and scalability of the algorithm are verified by formation simulation experiments at different scales.
ISSN: 2226-4310
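
The scalable leader-follower expansion described in the summary can be pictured with a small sketch (not from the paper; the wedge geometry, slot spacing, and the proportional controller standing in for the trained deep-reinforcement-learning policy are all illustrative assumptions): every follower reuses the same policy to track its assigned slot in a triangular structure behind the leader, so adding UGVs only adds slots, not new controllers.

import numpy as np

# Minimal sketch, not the authors' implementation: the triangular slot
# geometry and the saturated proportional controller (a stand-in for the
# learned DRL policy) are assumptions made for illustration only.

def triangular_slots(n_followers, spacing=2.0):
    # Desired follower offsets (leader frame) for a wedge formation:
    # row k behind the leader holds k+1 vehicles, so the formation can
    # grow to any number of UGVs without changing the controller.
    slots, row, idx = [], 1, 0
    while idx < n_followers:
        for j in range(row + 1):
            if idx >= n_followers:
                break
            lateral = (j - row / 2.0) * spacing  # spread across the row
            slots.append(np.array([-row * spacing, lateral]))
            idx += 1
        row += 1
    return slots

def follower_policy(rel_error, gain=0.5, v_max=1.0):
    # Shared policy reused by every follower: maps the error between a
    # follower's position and its assigned slot to a bounded velocity
    # command (a unit time step is assumed below).
    v = gain * rel_error
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

# Usage: one leader plus any number of followers, all sharing one policy.
leader_pos = np.array([0.0, 0.0])
followers = [np.random.uniform(-5.0, 5.0, 2) for _ in range(5)]
for _ in range(50):
    slots = triangular_slots(len(followers))
    followers = [p + follower_policy(leader_pos + s - p)
                 for p, s in zip(followers, slots)]
print([np.round(p, 2) for p in followers])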