Reinforcement Learning for Delay Tolerance and Energy Saving in Mobile Wireless Sensor Networks


Bibliographic Details
Main Authors: Oday Al-Jerew, Nizar Al Bassam, Abeer Alsadoon
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects: Wireless sensor networks; mobile data gathering; delay tolerance; relay hop count; mobile base station tour
Online Access: https://ieeexplore.ieee.org/document/10049537/
collection DOAJ
description Reinforcement Learning (RL) has emerged as a promising approach for improving the performance of Wireless Sensor Networks (WSNs). Q-learning is an RL technique in which the algorithm learns continuously by interacting with its environment, gathering information in order to choose the actions that maximize long-term performance. In this paper, we propose a data-gathering algorithm based on Q-learning, named the Bounded Hop Count - Reinforcement Learning Algorithm (BHC-RLA). The proposed algorithm uses a reward function to select a set of Cluster Heads (CHs) that balances energy saving against the data-gathering latency of a mobile Base Station (BS). In particular, the algorithm selects groups of CHs that receive the sensing data of their cluster nodes over paths of bounded hop count and forward the data to the mobile BS when it arrives; the CHs are also chosen to minimize the length of the BS tour. Extensive simulation experiments were conducted to evaluate the proposed algorithm against a traditional heuristic algorithm. We demonstrate that the proposed algorithm outperforms existing work in the mean length of the mobile BS tour and in network lifetime.
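The reward-driven selection the abstract describes rests on the standard tabular Q-learning update. The sketch below is only a generic illustration under assumed placeholders: the `states`, `actions`, `reward`, and `transition` arguments are hypothetical stand-ins, not the paper's actual BHC-RLA formulation, in which an action would correspond to choosing a CH set within the bounded hop count and the reward would trade residual energy against BS tour length.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

def q_learning(states, actions, reward, transition, episodes=500, steps=20):
    """Tabular Q-learning against a model given as two callables:
    reward(s, a) -> float and transition(s, a) -> next state.
    The state/action spaces here are illustrative placeholders."""
    q = {s: {a: 0.0 for a in actions} for s in states}
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(steps):
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < EPSILON:
                a = random.choice(actions)
            else:
                a = max(q[s], key=q[s].get)
            r, s_next = reward(s, a), transition(s, a)
            # Q-learning update: move Q(s, a) toward the bootstrapped target.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s_next].values()) - q[s][a])
            s = s_next
    return q
```

For example, with a single state and two candidate actions whose rewards differ, the learned Q-values come to prefer the higher-reward action, which is the mechanism BHC-RLA relies on when ranking candidate CH sets.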
id doaj.art-b81b6069dc864b4a924ba085b37d8cc4
institution Directory Open Access Journal
issn 2169-3536
spelling IEEE Access, vol. 11, pp. 19819-19835, published 2023-01-01. DOI: 10.1109/ACCESS.2023.3247576 (IEEE document 10049537).
Oday Al-Jerew (https://orcid.org/0000-0003-0245-3284), Asia Pacific International College, Sydney, NSW, Australia
Nizar Al Bassam (https://orcid.org/0000-0001-6642-9174), Middle East College, Muscat, Oman
Abeer Alsadoon, Asia Pacific International College, Sydney, NSW, Australia
topic Wireless sensor networks
mobile data gathering
delay tolerance
relay hop count
mobile base station tour
url https://ieeexplore.ieee.org/document/10049537/