Reinforcement Learning for Adaptive Resource Allocation in Fog RAN for IoT With Heterogeneous Latency Requirements

Bibliographic Details
Main Authors: Almuthanna Nassar, Yasin Yilmaz
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Subjects: Resource allocation; fog RAN; 5G cellular networks; low-latency communications; IoT; Markov decision process
Online Access: https://ieeexplore.ieee.org/document/8825838/
author Almuthanna Nassar
Yasin Yilmaz
collection DOAJ
description In light of the rapid proliferation of Internet of Things (IoT) devices and applications, the fog radio access network (Fog-RAN) has recently been proposed for fifth-generation (5G) wireless communications to meet the ultra-reliable low-latency communication (URLLC) requirements of IoT applications that cannot tolerate large delays. To this end, fog nodes (FNs) are equipped with computing, signal processing, and storage capabilities to extend the inherent operations and services of the cloud to the edge. We consider the problem of sequentially allocating an FN's limited resources to IoT applications with heterogeneous latency requirements. For each access request from an IoT user, the FN must decide whether to serve it locally at the edge using its own resources or to refer it to the cloud, conserving its valuable resources for future users of potentially higher utility to the system (i.e., with lower latency requirements). We formulate the Fog-RAN resource allocation problem as a Markov decision process (MDP) and employ several reinforcement learning (RL) methods, namely Q-learning, SARSA, Expected SARSA, and Monte Carlo, to solve the MDP by learning the optimal decision-making policies. We verify the performance and adaptivity of the RL methods and compare them with the performance of the network slicing approach under various slicing thresholds. Extensive simulation results covering 19 IoT environments with heterogeneous latency requirements corroborate that the RL methods always achieve the best possible performance regardless of the IoT environment.
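
The description above formulates the FN's serve-at-edge vs. refer-to-cloud decision as an MDP solved with tabular RL. As a rough illustration of one of the named methods (Q-learning), the sketch below runs an epsilon-greedy tabular Q-learning loop on a toy version of that decision. The state design (free FN resource blocks plus the latency class of the arriving request), the reward values, the arrival/release dynamics, and all constants are illustrative assumptions, not the formulation used in the paper.

```python
# Minimal sketch: tabular Q-learning for an edge-vs-cloud admission decision.
# All environment details below are assumptions made for illustration only.
import random
from collections import defaultdict

N_RESOURCES = 10          # assumed number of FN resource blocks
LATENCY_CLASSES = 4       # assumed latency classes (0 = most latency-critical)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)    # Q[(free_blocks, latency_class), action]

def reward(action, latency_class):
    """Illustrative utility: serving urgent traffic at the edge pays the most."""
    if action == 1:                           # serve locally at the FN
        return LATENCY_CLASSES - latency_class
    return 0.5                                # refer to the cloud: small constant utility

def choose_action(state):
    """Epsilon-greedy selection over the two admission actions."""
    if random.random() < EPSILON:
        return random.randint(0, 1)
    return max((0, 1), key=lambda a: Q[state, a])

def step(free_blocks, action):
    """Assumed dynamics: serving consumes one block; blocks are released at random,
    emulating service completions; the next request's class arrives uniformly."""
    if action == 1 and free_blocks > 0:
        free_blocks -= 1
    if random.random() < 0.3:                 # assumed release probability
        free_blocks = min(N_RESOURCES, free_blocks + 1)
    next_class = random.randint(0, LATENCY_CLASSES - 1)
    return free_blocks, next_class

def train(episodes=2000, horizon=200):
    for _ in range(episodes):
        free_blocks = N_RESOURCES
        latency_class = random.randint(0, LATENCY_CLASSES - 1)
        for _ in range(horizon):
            state = (free_blocks, latency_class)
            action = choose_action(state)
            if action == 1 and free_blocks == 0:
                action = 0                    # no free blocks: must refer to the cloud
            r = reward(action, latency_class)
            free_blocks, latency_class = step(free_blocks, action)
            next_state = (free_blocks, latency_class)
            best_next = max(Q[next_state, a] for a in (0, 1))
            Q[state, action] += ALPHA * (r + GAMMA * best_next - Q[state, action])
    return Q

if __name__ == "__main__":
    train()
    # Inspect the learned policy for a few states: edge = serve locally, cloud = refer.
    for blocks in (1, 5, 10):
        for cls in range(LATENCY_CLASSES):
            a = max((0, 1), key=lambda act: Q[(blocks, cls), act])
            print(f"free={blocks:2d} class={cls} -> {'edge' if a else 'cloud'}")
```

Whether the learned policy actually reserves the last few edge resources for the most latency-critical classes depends on these assumed rewards and dynamics; the paper's own MDP formulation and simulation setup should be consulted for the actual design.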
format Article
id doaj.art-adaff81fc06f431eafc32f97e4f39d7b
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2019-01-01
publisher IEEE
record_format Article
series IEEE Access
volume 7
pages 128014-128025
doi 10.1109/ACCESS.2019.2939735
ieee_document 8825838
author_orcid Almuthanna Nassar: https://orcid.org/0000-0002-0774-9183
author_affiliation Electrical Engineering Department, University of South Florida, Tampa, FL, USA (both authors)
title Reinforcement Learning for Adaptive Resource Allocation in Fog RAN for IoT With Heterogeneous Latency Requirements
topic Resource allocation
fog RAN
5G cellular networks
low-latency communications
IoT
Markov decision process
url https://ieeexplore.ieee.org/document/8825838/