DRLLA: Deep Reinforcement Learning for Link Adaptation

Link adaptation (LA) matches transmission parameters to the conditions on the radio link and therefore plays a major role in telecommunications. Improving LA is among the requirements for next-generation mobile telecommunication systems, and by refining link adaptation, higher channel efficiency can be achieved (i.e., an increased data rate thanks to a lower required bandwidth). Furthermore, by replacing traditional LA algorithms, radio transmission systems can better adapt themselves to a dynamic environment. Current state-of-the-art approaches have several drawbacks, including predefined, static decision boundaries and reliance on a single low-dimensional metric. A broadly used approach for handling a variety of related input variables is the neural network (NN). NNs can make use of multiple inputs, and combining them with reinforcement learning (RL) yields the so-called deep reinforcement learning (DRL) approach. With DRL, more complex parameter relationships can be considered when recommending the modulation and coding scheme (MCS) used in LA. Hence, this work examines the potential of DRL and includes experiments on different channels. Its main contribution lies in using DRL algorithms for LA, optimized for throughput based on a subcarrier observation matrix and a packet success rate feedback system. We apply Natural Actor-Critic (NAC) and Proximal Policy Optimization (PPO) algorithms to simulated channels, with a subsequent feasibility study on a prerecorded real-world channel. Empirical results from the examined channels suggest that Deep Reinforcement Learning for Link Adaptation (DRLLA) performs well, delivering promising data rates on the additive white Gaussian noise (AWGN) channel, the non-line-of-sight (NLOS) channel, and a prerecorded real-world channel. Regardless of the channel impairment, the agent responds to changing signal-to-interference-plus-noise-ratio (SINR) levels, as reflected in the expected changes in the effective data rate.
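
To make the described setup concrete, below is a minimal sketch of the loop the abstract outlines: an agent observes a per-subcarrier SINR matrix, selects an MCS index, and is rewarded with the effective throughput derived from packet success feedback. This is a sketch under stated assumptions only; the names (LinkAdaptationEnv, MCS_TABLE), the SINR thresholds, and the logistic packet-success model are illustrative and not taken from the paper.

# Illustrative sketch, not the paper's implementation. The MCS table,
# SINR thresholds, and success model below are hypothetical assumptions.
import numpy as np

# Hypothetical MCS table: (bits per symbol, code rate) per action index.
MCS_TABLE = [(2, 0.5), (4, 0.5), (4, 0.75), (6, 0.75)]

class LinkAdaptationEnv:
    """Toy gym-style environment: each step the agent selects an MCS index,
    observes a (history x subcarriers) SINR matrix, and receives the
    effective throughput (spectral efficiency times packet success)."""

    def __init__(self, n_subcarriers=64, history=4):
        self.n_subcarriers = n_subcarriers
        self.history = history

    def reset(self):
        # Draw a mean SINR (dB) and add per-subcarrier variation.
        self.mean_sinr = np.random.uniform(0.0, 25.0)
        self.obs = self.mean_sinr + np.random.randn(self.history, self.n_subcarriers)
        return self.obs.copy()

    def step(self, mcs_index):
        bits, rate = MCS_TABLE[mcs_index]
        # Crude packet-success model (assumption): success probability falls
        # when the chosen MCS needs more SINR than the channel provides.
        required_sinr_db = 3.0 * bits * rate          # hypothetical threshold
        margin = self.mean_sinr - required_sinr_db
        p_success = 1.0 / (1.0 + np.exp(-margin))     # logistic approximation
        ack = np.random.rand() < p_success            # packet success feedback
        reward = bits * rate * float(ack)             # effective data rate proxy
        # Let the channel drift so the agent must track changing SINR levels.
        self.mean_sinr += 0.5 * np.random.randn()
        new_row = self.mean_sinr + np.random.randn(self.n_subcarriers)
        self.obs = np.vstack([self.obs[1:], new_row])
        return self.obs.copy(), reward, False, {"ack": ack}

After wrapping such an environment in the standard Gym/Gymnasium interface (declaring observation_space and action_space and flattening the SINR matrix), an off-the-shelf PPO implementation, for example PPO("MlpPolicy", env) from Stable-Baselines3, could be trained against it; this pairing is likewise an assumption for illustration, not the authors' exact toolchain.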

Bibliographic Details
Main Authors: Florian Geiser, Daniel Wessel, Matthias Hummert, Andreas Weber, Dirk Wübben, Armin Dekorsy, Alberto Viseras
Format: Article
Language: English
Published: MDPI AG, 2022-11-01
Series: Telecom
ISSN: 2673-4001
Subjects: machine learning; mobile communication; reinforcement learning; link adaptation; channel observation
Online Access: https://www.mdpi.com/2673-4001/3/4/37
Published in: Telecom, vol. 3, no. 4, pp. 692-705 (2022)
DOI: 10.3390/telecom3040037
Author Affiliations:
Florian Geiser, Daniel Wessel, Alberto Viseras: Motius, 80807 München, Germany
Matthias Hummert, Dirk Wübben, Armin Dekorsy: Department of Communications Engineering, University of Bremen, 28359 Bremen, Germany
Andreas Weber: Nokia Bell Labs, 81541 München, Germany