Unmanned-Aerial-Vehicle-Assisted Computation Offloading for Mobile Edge Computing Based on Deep Reinforcement Learning

Users in heterogeneous wireless networks may generate massive amounts of data that are delay-sensitive or require computation-intensive processing. Owing to limitations in computation ability and battery capacity, wireless users (WUs) cannot easily process such data in a timely manner, and mobile edge computing (MEC) is increasingly being used to resolve this issue. Specifically, data generated by WUs can be offloaded for processing to the MEC server, which has greater computing power than the WUs. However, because MEC servers are deployed at fixed locations, unmanned aerial vehicles (UAVs) have been considered a promising complementary solution in heterogeneous wireless networks. In this study, we design a UAV-assisted computation offloading scheme in an MEC framework with a renewable power supply. The proposed model considers the instability of energy arrival, the stochastic computation tasks generated by WUs, and a time-varying channel state. Owing to the complexity of the state, it is difficult to apply a traditional Markov decision process (MDP) with complete prior knowledge to the offloading optimization. Accordingly, we propose UAV-assisted computation offloading for MEC based on deep reinforcement learning (UACODRL) to minimize the total cost, defined as the weighted sum of the delay, energy consumption, and bandwidth cost. We first use the K-Means algorithm for classification to reduce the dimension of the action space. Subsequently, we use UACODRL to find a near-optimal offloading scheme that minimizes the total cost. Simulations demonstrate that UACODRL converges satisfactorily and outperforms four baseline schemes under different parameter configurations.

Bibliographic Details
Main Authors: Hui Wang, Hongchang Ke, Weijia Sun
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Mobile edge computing; unmanned aerial vehicle; computation offloading; deep reinforcement learning
Online Access: https://ieeexplore.ieee.org/document/9212373/
author Hui Wang
Hongchang Ke
Weijia Sun
collection DOAJ
description Users in heterogeneous wireless networks may generate massive amounts of data that are delay-sensitive or require computation-intensive processing. Owing to limitations in computation ability and battery capacity, wireless users (WUs) cannot easily process such data in a timely manner, and mobile edge computing (MEC) is increasingly being used to resolve this issue. Specifically, data generated by WUs can be offloaded for processing to the MEC server, which has greater computing power than the WUs. However, because MEC servers are deployed at fixed locations, unmanned aerial vehicles (UAVs) have been considered a promising complementary solution in heterogeneous wireless networks. In this study, we design a UAV-assisted computation offloading scheme in an MEC framework with a renewable power supply. The proposed model considers the instability of energy arrival, the stochastic computation tasks generated by WUs, and a time-varying channel state. Owing to the complexity of the state, it is difficult to apply a traditional Markov decision process (MDP) with complete prior knowledge to the offloading optimization. Accordingly, we propose UAV-assisted computation offloading for MEC based on deep reinforcement learning (UACODRL) to minimize the total cost, defined as the weighted sum of the delay, energy consumption, and bandwidth cost. We first use the K-Means algorithm for classification to reduce the dimension of the action space. Subsequently, we use UACODRL to find a near-optimal offloading scheme that minimizes the total cost. Simulations demonstrate that UACODRL converges satisfactorily and outperforms four baseline schemes under different parameter configurations.
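The total cost in the abstract is a weighted sum of delay, energy consumption, and bandwidth cost. A minimal sketch of that objective follows; the weights and per-task values are illustrative assumptions, not parameters reported in the paper.

```python
# Hypothetical sketch of the weighted-sum cost from the abstract:
# cost = w_d * delay + w_e * energy + w_b * bandwidth.
# All weights and task values below are made up for illustration.

def total_cost(delay, energy, bandwidth, w_d=0.5, w_e=0.3, w_b=0.2):
    """Weighted sum of delay, energy consumption, and bandwidth cost."""
    return w_d * delay + w_e * energy + w_b * bandwidth

# Compare processing one task locally vs. offloading it (invented numbers):
# local processing has high delay/energy but no bandwidth cost;
# offloading trades bandwidth cost for lower delay and energy.
local = total_cost(delay=2.0, energy=1.5, bandwidth=0.0)    # 1.45
offload = total_cost(delay=0.8, energy=0.4, bandwidth=1.0)  # 0.72
best = min(local, offload)
```

An offloading policy would make this choice per task per time slot, under the stochastic energy arrivals and time-varying channel the abstract describes; the DRL agent learns it rather than enumerating it.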
format Article
id doaj.art-56a1b74b862d47fb8ae3356b4ee0f8cb
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2020-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-56a1b74b862d47fb8ae3356b4ee0f8cb (updated 2022-12-21T22:54:53Z)
IEEE, IEEE Access, ISSN 2169-3536, 2020-01-01, vol. 8, pp. 180784-180798
DOI 10.1109/ACCESS.2020.3028553; IEEE document 9212373
Unmanned-Aerial-Vehicle-Assisted Computation Offloading for Mobile Edge Computing Based on Deep Reinforcement Learning
Hui Wang (https://orcid.org/0000-0001-5074-900X), College of Computer Science and Engineering, Changchun University of Technology, Changchun, China
Hongchang Ke (https://orcid.org/0000-0003-0946-9289), School of Computer Technology and Engineering, Changchun Institute of Technology, Changchun, China
Weijia Sun (https://orcid.org/0000-0003-0738-3617), College of Computer Science and Engineering, Changchun University of Technology, Changchun, China
https://ieeexplore.ieee.org/document/9212373/
Mobile edge computing; unmanned aerial vehicle; computation offloading; deep reinforcement learning
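The abstract's K-Means step clusters candidate actions so the DRL agent selects among a few cluster centers instead of the full action space. A minimal sketch using plain Lloyd's algorithm; the action encoding (offload fraction, target device) and the cluster count are assumptions, not the paper's design.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm over action feature vectors.
    Returns k cluster centers; a DRL agent can then pick among
    these k centers, reducing the action-space dimension."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each center as its cluster mean (keep old if empty).
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers

# Illustrative action encoding: (offload_fraction, target), where
# target 0 = local, 1 = MEC server, 2 = UAV. 33 raw actions -> 4 centers.
actions = [(f / 10, t) for f in range(11) for t in (0, 1, 2)]
centers = kmeans(actions, k=4)
```

With 4 centers instead of 33 raw actions, the DRL policy's output layer shrinks accordingly; the trade-off is coarser action granularity.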
title Unmanned-Aerial-Vehicle-Assisted Computation Offloading for Mobile Edge Computing Based on Deep Reinforcement Learning
topic Mobile edge computing
unmanned aerial vehicle
computation offloading
deep reinforcement learning
url https://ieeexplore.ieee.org/document/9212373/