Reinforcement learning versus swarm intelligence for autonomous multi-HAPS coordination

Abstract This work analyses the performance of Reinforcement Learning (RL) versus Swarm Intelligence (SI) for coordinating multiple unmanned High Altitude Platform Stations (HAPS) for communications area coverage. It builds upon previous work which examined various elements of both algorithms. The main aim of this paper is to address the continuous state-space challenge by using partitioning to manage the high-dimensionality problem. This enabled a comparison of the classical cases of both RL and SI, establishing a baseline for future comparisons of improved versions. In previous work, SI was observed to perform better across various key performance indicators. However, even after tuning parameters and empirically choosing a suitable partitioning ratio for the RL state space, the SI algorithm still maintained superior coordination capability, achieving higher mean overall user coverage (about 20% better than the RL algorithm) in addition to faster convergence rates. Though the RL technique showed better average peak user coverage, its unpredictable coverage dips were a key weakness, making SI the more suitable algorithm within the context of this work.

Bibliographic Details
Main Authors: Ogbonnaya Anicho, Philip B. Charlesworth, Gurvinder S. Baicher, Atulya K. Nagar (all Liverpool Hope University)
Format: Article
Language: English
Published: Springer, 2021-05-01
Series: SN Applied Sciences
ISSN: 2523-3963, 2523-3971
Subjects: Swarm intelligence; Reinforcement learning; Multi-HAPS; Autonomous coordination
Online Access: https://doi.org/10.1007/s42452-021-04658-6
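The abstract describes handling the continuous state-space challenge by partitioning it into discrete cells so that tabular RL becomes tractable. The paper's actual state variables and partitioning ratio are not given in this record, so the sketch below is purely illustrative: a hypothetical `partition_state` helper that maps a continuous state vector to a single cell index usable as a Q-table row, with the number of bins per dimension standing in for the partitioning ratio.

```python
def partition_state(state, lows, highs, bins):
    """Map a continuous state vector to a single discrete cell index.

    Each dimension i is split into bins[i] equal intervals, turning a
    continuous state space into a finite set of cells that can index
    the rows of a tabular Q-table.
    """
    cells = []
    for x, lo, hi, n in zip(state, lows, highs, bins):
        # Normalise the coordinate to [0, 1), then scale to its bin count.
        frac = min(max((x - lo) / (hi - lo), 0.0), 1.0 - 1e-12)
        cells.append(int(frac * n))
    # Flatten the multi-dimensional cell coordinates to one index (row-major).
    index = 0
    for c, n in zip(cells, bins):
        index = index * n + c
    return index

# Example: a platform position in a 100 km x 100 km region, 10 x 10 cells.
idx = partition_state([35.0, 72.0], lows=[0, 0], highs=[100, 100], bins=[10, 10])
# cells (3, 7) -> index 3 * 10 + 7 = 37
```

Choosing the bin counts embodies the trade-off the abstract alludes to: finer partitions approximate the continuous space better but blow up the Q-table size, which is why the authors report tuning the partitioning ratio empirically.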