LP-MAB: Improving the Energy Efficiency of LoRaWAN Using a Reinforcement-Learning-Based Adaptive Configuration Algorithm


Bibliographic Details
Main Authors: Benyamin Teymuri, Reza Serati, Nikolaos Athanasios Anagnostopoulos, Mehdi Rasti
Format: Article
Language: English
Published: MDPI AG 2023-02-01
Series: Sensors
Subjects: Internet of Things (IoT); LoRaWAN; adaptive configuration; machine learning; reinforcement learning
Online Access: https://www.mdpi.com/1424-8220/23/4/2363
author Benyamin Teymuri
Reza Serati
Nikolaos Athanasios Anagnostopoulos
Mehdi Rasti
collection DOAJ
description In the Internet of Things (IoT), Low-Power Wide-Area Networks (LPWANs) are designed to provide low energy consumption while maintaining a long communication range for End Devices (EDs). LoRa is a communication protocol that can cover a wide range with low energy consumption. To evaluate the efficiency of the LoRa Wide-Area Network (LoRaWAN), three criteria can be considered, namely the Packet Delivery Rate (PDR), Energy Consumption (EC), and coverage area. A set of transmission parameters has to be configured to establish a communication link; these parameters affect the data rate, noise resistance, receiver sensitivity, and EC. The Adaptive Data Rate (ADR) algorithm is a mechanism that configures the transmission parameters of EDs with the aim of improving the PDR. We therefore introduce a new algorithm based on the Multi-Armed Bandit (MAB) technique that configures the EDs’ transmission parameters in a centralized manner on the Network Server (NS) side, while also improving the EC. The performance of the proposed algorithm, the Low-Power Multi-Armed Bandit (LP-MAB), is evaluated through simulation and compared with other approaches in different scenarios. The simulation results indicate that LP-MAB achieves lower EC than the other algorithms while maintaining a relatively high PDR in various circumstances.
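
The description characterizes LP-MAB as a multi-armed bandit run on the Network Server to pick transmission parameters for each End Device, favoring configurations that deliver packets at low energy cost. The Python sketch below is a minimal, hypothetical illustration of that idea under stated assumptions, not the authors' implementation: it assumes an epsilon-greedy bandit whose arms are (spreading factor, transmit power) pairs and whose reward combines delivery success with an assumed energy penalty; all names, constants, and the reward shape are placeholders.

# Hedged sketch (not the paper's algorithm): an epsilon-greedy multi-armed
# bandit that a Network Server could run per End Device to choose a
# (spreading factor, transmit power) configuration.
import random

SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]   # LoRa spreading factors
TX_POWERS_DBM = [2, 5, 8, 11, 14]           # assumed EU868-style TP steps
ARMS = [(sf, tp) for sf in SPREADING_FACTORS for tp in TX_POWERS_DBM]

class EpsilonGreedyConfigurator:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {arm: 0 for arm in ARMS}    # times each arm was tried
        self.values = {arm: 0.0 for arm in ARMS}  # running mean reward per arm

    def select_arm(self):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < self.epsilon:
            return random.choice(ARMS)
        return max(ARMS, key=lambda a: self.values[a])

    def update(self, arm, delivered, energy_mj, energy_weight=0.01):
        # Illustrative reward: +1 for a delivered uplink minus a scaled energy
        # penalty, so low-energy configurations that still deliver are preferred.
        reward = (1.0 if delivered else 0.0) - energy_weight * energy_mj
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean

# Usage sketch: after each uplink the NS observes delivery feedback and an
# energy estimate for the ED, updates the bandit, and schedules the next config.
bandit = EpsilonGreedyConfigurator()
sf, tp = bandit.select_arm()
bandit.update((sf, tp), delivered=True, energy_mj=38.5)  # dummy feedback values
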
first_indexed 2024-03-11T08:10:15Z
format Article
id doaj.art-d198c4950139430ea000bfce1c367f55
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-11T08:10:15Z
publishDate 2023-02-01
publisher MDPI AG
record_format Article
series Sensors
doi 10.3390/s23042363
citation Sensors, vol. 23, no. 4, art. no. 2363, published 2023-02-01 by MDPI AG
affiliations Benyamin Teymuri and Reza Serati: Department of Computer Engineering, Amirkabir University of Technology, Tehran P.O. Box 15875-4413, Iran; Nikolaos Athanasios Anagnostopoulos: Faculty of Computer Science and Mathematics, University of Passau, 94032 Passau, Germany; Mehdi Rasti: Department of Computer Engineering, Amirkabir University of Technology, Tehran P.O. Box 15875-4413, Iran
title LP-MAB: Improving the Energy Efficiency of LoRaWAN Using a Reinforcement-Learning-Based Adaptive Configuration Algorithm
topic Internet of Things (IoT)
LoRaWAN
adaptive configuration
machine learning
reinforcement learning
url https://www.mdpi.com/1424-8220/23/4/2363