Cybertwin-driven resource allocation using deep reinforcement learning in 6G-enabled edge environment

The recent emergence of sixth-generation (6G) wireless communication technology has resulted in the rapid proliferation of a wide range of real-time applications. These applications are highly data- and computation-intensive and generate massive data traffic. Cybertwin-driven edge computing emerges as a promising solution to satisfy massive user demand, but it also introduces new challenges. One of the most difficult challenges in edge networks is efficiently offloading tasks while managing computation, communication, and cache resources. Traditional statistical optimization methods are incapable of addressing the offloading problem in a dynamic edge computing environment. In this work, we propose a joint resource allocation and computation offloading scheme that integrates deep reinforcement learning into Cybertwin-enabled 6G wireless networks. The proposed system leverages the MATD3 algorithm to provide quality of service (QoS) to end users by minimizing overall latency and energy consumption while managing cache resources more effectively. Because these edge resources are deployed in inaccessible locations, we also employ a secure authentication mechanism for Cybertwins. The proposed system is implemented in a simulated environment, and its results are compared across several performance metrics against previous benchmark methods such as RRA, GRA, and MADDPG. The comparative analysis reveals that the proposed MATD3 scheme reduces end-to-end latency and energy consumption by 13.8% and 12.5%, respectively, over MADDPG, with a 4% increase in successful task completion.

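As a purely illustrative aside (none of this code comes from the paper, whose state spaces, reward design, and network architectures are not reproduced here), the core idea behind the MATD3 family named in the abstract is TD3's clipped double-Q learning: each agent maintains two target critics and uses the smaller of their estimates when forming the Bellman target, which curbs value overestimation. A minimal sketch of that target computation:

```python
# Minimal sketch of the clipped double-Q target used by TD3 and its
# multi-agent extension MATD3. All function names and numeric values
# below are illustrative assumptions, not taken from the paper.

def td3_target(reward, done, q1_next, q2_next, gamma=0.99):
    """Bellman target with clipped double-Q.

    Takes the minimum of the two target critics' estimates for the
    next state-action pair to reduce overestimation bias.
    """
    q_min = min(q1_next, q2_next)
    # Terminal transitions (done == 1.0) bootstrap no future value.
    return reward + gamma * (1.0 - done) * q_min

# Example: a non-terminal transition where the two critics disagree;
# the pessimistic estimate (8.0) is the one that gets bootstrapped.
target = td3_target(reward=1.0, done=0.0, q1_next=10.0, q2_next=8.0)
print(target)  # 1.0 + 0.99 * 8.0 = 8.92
```

In the multi-agent setting each edge agent's critics additionally condition on the joint observations and actions of the other agents, but the pessimistic min over twin critics is the same.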
Bibliographic Details
Main Authors: Vibha Jain, Bijendra Kumar, Aditya Gupta
Format: Article
Language:English
Published: Elsevier 2022-09-01
Series:Journal of King Saud University: Computer and Information Sciences
Subjects: Cybertwin; 6G; Resource allocation; Computation offloading; Deep reinforcement learning
Online Access:http://www.sciencedirect.com/science/article/pii/S1319157822000386
collection DOAJ
description The recent emergence of sixth-generation (6G) wireless communication technology has resulted in the rapid proliferation of a wide range of real-time applications. These applications are highly data- and computation-intensive and generate massive data traffic. Cybertwin-driven edge computing emerges as a promising solution to satisfy massive user demand, but it also introduces new challenges. One of the most difficult challenges in edge networks is efficiently offloading tasks while managing computation, communication, and cache resources. Traditional statistical optimization methods are incapable of addressing the offloading problem in a dynamic edge computing environment. In this work, we propose a joint resource allocation and computation offloading scheme that integrates deep reinforcement learning into Cybertwin-enabled 6G wireless networks. The proposed system leverages the MATD3 algorithm to provide quality of service (QoS) to end users by minimizing overall latency and energy consumption while managing cache resources more effectively. Because these edge resources are deployed in inaccessible locations, we also employ a secure authentication mechanism for Cybertwins. The proposed system is implemented in a simulated environment, and its results are compared across several performance metrics against previous benchmark methods such as RRA, GRA, and MADDPG. The comparative analysis reveals that the proposed MATD3 scheme reduces end-to-end latency and energy consumption by 13.8% and 12.5%, respectively, over MADDPG, with a 4% increase in successful task completion.
id doaj.art-0e0eb0a04e8b41b5a129108a75bf75c2
issn 1319-1578
spelling doaj.art-0e0eb0a04e8b41b5a129108a75bf75c2 (record updated 2022-12-22)
Volume 34, Issue 8 (2022-09-01), pp. 5708-5720
Author affiliations: Vibha Jain and Bijendra Kumar, Netaji Subhas University of Technology, New Delhi, India; Aditya Gupta (corresponding author), SRM University, Delhi-NCR, Sonepat, India
topic Cybertwin
6G
Resource allocation
Computation offloading
Deep reinforcement learning