Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence
In mobile edge computing networks, achieving effective load balancing across edge server nodes is essential for minimizing task processing latency. However, the lack of a priori knowledge regarding the current load state of edge nodes for user devices presents a significant challenge in multi-user,...
Main Authors: | Wen Chen, Sibin Liu, Yuxiao Yang, Wenjing Hu, Jinming Yu |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-02-01 |
Series: | Sensors |
Subjects: | deep reinforcement learning; load balancing; mobile edge computing; resource allocation; task offloading |
Online Access: | https://www.mdpi.com/1424-8220/25/5/1491 |
_version_ | 1826531367663435776 |
author | Wen Chen; Sibin Liu; Yuxiao Yang; Wenjing Hu; Jinming Yu |
author_facet | Wen Chen; Sibin Liu; Yuxiao Yang; Wenjing Hu; Jinming Yu |
author_sort | Wen Chen |
collection | DOAJ |
description | In mobile edge computing networks, achieving effective load balancing across edge server nodes is essential for minimizing task processing latency. However, user devices' lack of a priori knowledge of the current load state of edge nodes presents a significant challenge in multi-user, multi-edge-node scenarios. This challenge is exacerbated by the inherent dynamics and uncertainty of edge node load variations. To tackle these issues, we propose a deep reinforcement learning-based approach for task offloading and resource allocation, aiming to balance the load on edge nodes while reducing the long-term average cost. Specifically, we decompose the optimization problem into two subproblems: task offloading and resource allocation. The Karush–Kuhn–Tucker (KKT) conditions are employed to derive the optimal strategy for allocating communication bandwidth and computational resources at the edge nodes. We utilize Long Short-Term Memory (LSTM) networks to forecast the real-time activity of edge nodes. Additionally, we integrate deep compression techniques to expedite model convergence, facilitating faster execution on user devices. Our simulation results demonstrate that the proposed scheme achieves a 47% reduction in the task drop rate, a 14% decrease in the total system cost, and a 7.6% improvement in runtime compared to the baseline schemes. (An illustrative sketch of the LSTM load-forecasting step appears after this record.) |
first_indexed | 2025-03-14T01:34:13Z |
format | Article |
id | doaj.art-c833566cda6a4f20843316e5bd10dbd3 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2025-03-14T01:34:13Z |
publishDate | 2025-02-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-c833566cda6a4f20843316e5bd10dbd3; 2025-03-12T13:59:59Z; eng; MDPI AG; Sensors; ISSN 1424-8220; 2025-02-01; vol. 25, no. 5, art. 1491; doi:10.3390/s25051491; Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence; Wen Chen, Sibin Liu, Yuxiao Yang, Wenjing Hu, Jinming Yu (all: School of Information Science and Technology, Donghua University, Shanghai 201620, China); abstract as in the description field; https://www.mdpi.com/1424-8220/25/5/1491; deep reinforcement learning; load balancing; mobile edge computing; resource allocation; task offloading |
spellingShingle | Wen Chen; Sibin Liu; Yuxiao Yang; Wenjing Hu; Jinming Yu; Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence; Sensors; deep reinforcement learning; load balancing; mobile edge computing; resource allocation; task offloading |
title | Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence |
title_full | Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence |
title_fullStr | Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence |
title_full_unstemmed | Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence |
title_short | Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence |
title_sort | dynamic edge loading balancing with edge node activity prediction and accelerating the model convergence |
topic | deep reinforcement learning; load balancing; mobile edge computing; resource allocation; task offloading |
url | https://www.mdpi.com/1424-8220/25/5/1491 |
work_keys_str_mv | AT wenchen dynamicedgeloadingbalancingwithedgenodeactivitypredictionandacceleratingthemodelconvergence AT sibinliu dynamicedgeloadingbalancingwithedgenodeactivitypredictionandacceleratingthemodelconvergence AT yuxiaoyang dynamicedgeloadingbalancingwithedgenodeactivitypredictionandacceleratingthemodelconvergence AT wenjinghu dynamicedgeloadingbalancingwithedgenodeactivitypredictionandacceleratingthemodelconvergence AT jinmingyu dynamicedgeloadingbalancingwithedgenodeactivitypredictionandacceleratingthemodelconvergence |
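Note on the method described in the abstract: the LSTM-based prediction of edge node activity can be pictured with the minimal sketch below. This is a hypothetical illustration, not the authors' implementation; the PyTorch framework, the `EdgeLoadForecaster` name, the sliding-window input shape, and the mean-squared-error training objective are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class EdgeLoadForecaster(nn.Module):
    """Predicts next-slot load for each edge node from a window of past observations."""
    def __init__(self, num_nodes: int, hidden_size: int = 64):
        super().__init__()
        # Each time step feeds the load vector of all edge nodes into the LSTM.
        self.lstm = nn.LSTM(input_size=num_nodes, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_nodes)  # hidden state -> per-node load

    def forward(self, load_history: torch.Tensor) -> torch.Tensor:
        # load_history: (batch, window_len, num_nodes)
        out, _ = self.lstm(load_history)
        return self.head(out[:, -1, :])  # predicted load for the next time slot

# Toy usage with placeholder data: 4 edge nodes, windows of 10 past slots.
model = EdgeLoadForecaster(num_nodes=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
history = torch.rand(8, 10, 4)   # (batch, window, nodes): synthetic loads in [0, 1]
target = torch.rand(8, 4)        # observed loads at the following slot
loss = nn.MSELoss()(model(history), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()                 # one illustrative training step
```

In the paper's setting, such predicted loads would presumably enter the state observed by the deep reinforcement learning offloading agent; the snippet only illustrates the forecasting step and its tensor shapes.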