Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning

With the increasing complexity of patrol tasks, the use of deep reinforcement learning for collaborative coverage path planning (CPP) of multi-mobile robots has become a new research hotspot. To complete the CPP task more effectively under complex environmental factors and operational limitations, such as terrain obstacles and the scope of the task area, this paper proposes an improved K-Means clustering algorithm to divide the multi-robot task area. The improved algorithm refines the selection of the first initial clustering point, which makes the clustering process more reasonable and helps distribute tasks more evenly. The method also introduces deep reinforcement learning with a dueling network structure to better handle terrain obstacles and improves the reward function to guide the coverage process. Simulation experiments confirm the advantages of this method in balanced task assignment, strategy quality, and coverage efficiency: it reduces path duplication and omission while ensuring coverage quality.

Bibliographic Details
Main Authors: Jianjun Ni, Yu Gu, Guangyi Tang, Chunyan Ke, Yang Gu
Format: Article
Language: English
Published: MDPI AG 2024-02-01
Series: Electronics
Subjects: coverage path planning, deep reinforcement learning, dueling network, improved K-Means clustering algorithm, multi-mobile robots
Online Access: https://www.mdpi.com/2079-9292/13/5/944
_version_ 1797264643947757568
author Jianjun Ni
Yu Gu
Guangyi Tang
Chunyan Ke
Yang Gu
author_facet Jianjun Ni
Yu Gu
Guangyi Tang
Chunyan Ke
Yang Gu
author_sort Jianjun Ni
collection DOAJ
description With the increasing complexity of patrol tasks, the use of deep reinforcement learning for collaborative coverage path planning (CPP) of multi-mobile robots has become a new research hotspot. To complete the CPP task more effectively under complex environmental factors and operational limitations, such as terrain obstacles and the scope of the task area, this paper proposes an improved K-Means clustering algorithm to divide the multi-robot task area. The improved algorithm refines the selection of the first initial clustering point, which makes the clustering process more reasonable and helps distribute tasks more evenly. The method also introduces deep reinforcement learning with a dueling network structure to better handle terrain obstacles and improves the reward function to guide the coverage process. Simulation experiments confirm the advantages of this method in balanced task assignment, strategy quality, and coverage efficiency: it reduces path duplication and omission while ensuring coverage quality.
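The description above names two structural ideas: a K-Means variant whose first initial cluster centre is chosen more carefully before the coverage area is partitioned among the robots, and a dueling-network value function trained with a reshaped reward. The record does not give the paper's actual centroid rule, network sizes, or reward terms, so the sketch below is only an illustrative approximation under stated assumptions: it seeds the first centre at the geometric centroid of the free cells (an assumed stand-in for the improved first-point selection), picks the remaining centres K-Means++-style, and implements a standard dueling value/advantage head in PyTorch.

```python
# Illustrative sketch only; the paper's actual centroid rule, reward terms,
# and network architecture are not specified in this record.
import numpy as np
import torch
import torch.nn as nn


def partition_area(free_cells: np.ndarray, n_robots: int, n_iter: int = 50, seed: int = 0):
    """Divide free grid cells (N x 2 coordinate array) among n_robots.

    Assumption: the first centre is the geometric centroid of the free area,
    used here as a stand-in for the paper's improved first-point selection;
    the remaining centres are seeded K-Means++-style.
    """
    rng = np.random.default_rng(seed)
    centres = [free_cells.mean(axis=0)]  # assumed first-centre rule
    for _ in range(1, n_robots):
        # squared distance of every cell to its nearest existing centre
        d2 = np.min(((free_cells[:, None, :] - np.array(centres)[None]) ** 2).sum(-1), axis=1)
        centres.append(free_cells[rng.choice(len(free_cells), p=d2 / d2.sum())])
    centres = np.array(centres, dtype=float)
    for _ in range(n_iter):  # standard Lloyd iterations
        labels = ((free_cells[:, None, :] - centres[None]) ** 2).sum(-1).argmin(axis=1)
        for k in range(n_robots):
            if np.any(labels == k):
                centres[k] = free_cells[labels == k].mean(axis=0)
    return labels, centres


class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        a = self.advantage(h)
        return self.value(h) + a - a.mean(dim=-1, keepdim=True)


# Example usage: partition the free cells of a grid among 3 robots.
# cells = np.argwhere(grid == 0); labels, centres = partition_area(cells, 3)
```

In this reading, each robot's cluster of cells defines the sub-area on which its coverage policy is trained, which is how the abstract describes the task-division step feeding into the reinforcement learning stage.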
first_indexed 2024-04-25T00:32:10Z
format Article
id doaj.art-648d0d2bd9a641ac8bcb7688b4de9a4d
institution Directory Open Access Journal
issn 2079-9292
language English
last_indexed 2024-04-25T00:32:10Z
publishDate 2024-02-01
publisher MDPI AG
record_format Article
series Electronics
spelling doaj.art-648d0d2bd9a641ac8bcb7688b4de9a4d
2024-03-12T16:42:41Z
eng
MDPI AG
Electronics
2079-9292
2024-02-01
Volume 13, Issue 5, Article 944
10.3390/electronics13050944
Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning
Jianjun Ni (College of Artificial Intelligence and Automation, Hohai University, Changzhou 213200, China)
Yu Gu (College of Artificial Intelligence and Automation, Hohai University, Changzhou 213200, China)
Guangyi Tang (College of Artificial Intelligence and Automation, Hohai University, Changzhou 213200, China)
Chunyan Ke (College of Information Science and Engineering, Hohai University, Changzhou 213200, China)
Yang Gu (College of Artificial Intelligence and Automation, Hohai University, Changzhou 213200, China)
With the increasing complexity of patrol tasks, the use of deep reinforcement learning for collaborative coverage path planning (CPP) of multi-mobile robots has become a new research hotspot. To complete the CPP task more effectively under complex environmental factors and operational limitations, such as terrain obstacles and the scope of the task area, this paper proposes an improved K-Means clustering algorithm to divide the multi-robot task area. The improved algorithm refines the selection of the first initial clustering point, which makes the clustering process more reasonable and helps distribute tasks more evenly. The method also introduces deep reinforcement learning with a dueling network structure to better handle terrain obstacles and improves the reward function to guide the coverage process. Simulation experiments confirm the advantages of this method in balanced task assignment, strategy quality, and coverage efficiency: it reduces path duplication and omission while ensuring coverage quality.
https://www.mdpi.com/2079-9292/13/5/944
coverage path planning
deep reinforcement learning
dueling network
improved K-Means clustering algorithm
multi-mobile robots
spellingShingle Jianjun Ni
Yu Gu
Guangyi Tang
Chunyan Ke
Yang Gu
Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning
Electronics
coverage path planning
deep reinforcement learning
dueling network
improved K-Means clustering algorithm
multi-mobile robots
title Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning
title_full Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning
title_fullStr Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning
title_full_unstemmed Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning
title_short Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning
title_sort cooperative coverage path planning for multi mobile robots based on improved k means clustering and deep reinforcement learning
topic coverage path planning
deep reinforcement learning
dueling network
improved K-Means clustering algorithm
multi-mobile robots
url https://www.mdpi.com/2079-9292/13/5/944
work_keys_str_mv AT jianjunni cooperativecoveragepathplanningformultimobilerobotsbasedonimprovedkmeansclusteringanddeepreinforcementlearning
AT yugu cooperativecoveragepathplanningformultimobilerobotsbasedonimprovedkmeansclusteringanddeepreinforcementlearning
AT guangyitang cooperativecoveragepathplanningformultimobilerobotsbasedonimprovedkmeansclusteringanddeepreinforcementlearning
AT chunyanke cooperativecoveragepathplanningformultimobilerobotsbasedonimprovedkmeansclusteringanddeepreinforcementlearning
AT yanggu cooperativecoveragepathplanningformultimobilerobotsbasedonimprovedkmeansclusteringanddeepreinforcementlearning