Improving the performance of tasks offloading for internet of vehicles via deep reinforcement learning methods

Abstract: With the rapid development of communication technologies, the quality of our daily life has been improved by applications of smart communications and networking, such as intelligent transportation and mobile service computing. However, high user demands for quality of service (QoS) are forcing intelligent transportation systems to continuously improve responsiveness and reduce the task offloading delay in the internet of vehicles (IoV). To meet the low-latency requirement of vehicle task offloading, an offloading scheme combining mobile edge computing (MEC) and deep reinforcement learning (DRL) is proposed in this article. First, a realistic map is simulated, the task queue is initialized, and a task offloading environment with multiple service nodes is built. Then, an algorithm that combines deep learning with reinforcement learning, the deep Q-learning network (DQN) algorithm, is developed to optimize the offloading scheme by reducing the offloading latency. Finally, since complete state information cannot be observed effectively in the environment, a long short-term memory (LSTM) model is applied within the DQN to train its neural network and improve offloading efficiency. The simulation results show that MEC-based vehicle task offloading can effectively reduce the offloading latency.
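The abstract outlines a three-step method: build a multi-node offloading environment, train a DQN to pick offloading targets that minimize latency, and embed an LSTM in the Q-network to cope with partial observability. The following Python/PyTorch snippet is a minimal, hypothetical sketch of that idea, not the authors' implementation: the names (LSTMQNet, offload), node speeds, task sizes, and hyperparameters are invented, and standard DQN machinery such as the replay buffer and target network is omitted for brevity.

```python
import random
import torch
import torch.nn as nn

N_NODES = 3              # hypothetical: 0 = local CPU, 1..2 = MEC service nodes
STATE_DIM = N_NODES + 1  # observation: per-node queue delay + current task size

class LSTMQNet(nn.Module):
    """Q-network with an LSTM, so Q-values depend on the observation history
    (the paper's remedy for a partially observable environment)."""
    def __init__(self, state_dim, n_actions, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, seq):                 # seq: (batch, time, state_dim)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1, :])     # Q-values from the last time step

def offload(queues, task_size, action):
    """Toy dynamics: latency = queueing delay + compute time + uplink cost."""
    speed = [1.0, 4.0, 4.0][action]                   # MEC nodes compute faster...
    uplink = 0.0 if action == 0 else 0.3 * task_size  # ...but cost transmission
    latency = queues[action] + task_size / speed + uplink
    queues[action] += task_size / speed               # task occupies chosen node
    for i in range(len(queues)):                      # queues drain between tasks
        queues[i] = max(0.0, queues[i] - 0.5)
    return -latency                                   # reward = negative delay

qnet = LSTMQNet(STATE_DIM, N_NODES)
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.2                                 # discount, exploration rate

for episode in range(100):
    queues, history = [0.0] * N_NODES, []
    task = random.uniform(0.5, 2.0)                   # size of the first task
    for t in range(20):
        history.append(queues + [task])               # snapshot the observation
        seq = torch.tensor([history], dtype=torch.float32)
        if random.random() < eps:                     # epsilon-greedy choice
            action = random.randrange(N_NODES)
        else:
            with torch.no_grad():
                action = int(qnet(seq).argmax())
        reward = offload(queues, task, action)
        task = random.uniform(0.5, 2.0)               # next task arrives
        next_seq = torch.tensor([history + [queues + [task]]],
                                dtype=torch.float32)
        with torch.no_grad():                         # one-step TD target
            target = reward + gamma * qnet(next_seq).max()
        loss = (qnet(seq)[0, action] - target).pow(2)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the paper's full setting the state would additionally encode vehicle positions on the simulated map, and training would presumably use experience replay and a target network as in standard DQN; this sketch only illustrates how an LSTM-based Q-network can select offloading targets from an observation history.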

Bibliographic Details
Main Authors: Ting Wang, Xiong Luo, Wenbing Zhao
Format: Article
Language: English
Published: Wiley, 2022-06-01
Series: IET Communications
Online Access: https://doi.org/10.1049/cmu2.12334
ISSN: 1751-8628, 1751-8636
Volume/Issue/Pages: Vol. 16, Iss. 10, pp. 1230-1240
DOI: 10.1049/cmu2.12334
Collection: DOAJ (Directory of Open Access Journals)
Author Affiliations:
Ting Wang: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
Xiong Luo: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
Wenbing Zhao: Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, Ohio, USA