Multi-Model Running Latency Optimization in an Edge Computing Paradigm
Main Authors: | Peisong Li, Xinheng Wang, Kaizhu Huang, Yi Huang, Shancang Li, Muddesar Iqbal |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-08-01 |
Series: | Sensors |
Subjects: | edge computing; latency optimization; multi-model; task scheduling; autonomous driving; AI |
Online Access: | https://www.mdpi.com/1424-8220/22/16/6097 |
_version_ | 1797408051873972224 |
---|---|
author | Peisong Li; Xinheng Wang; Kaizhu Huang; Yi Huang; Shancang Li; Muddesar Iqbal |
author_facet | Peisong Li; Xinheng Wang; Kaizhu Huang; Yi Huang; Shancang Li; Muddesar Iqbal |
author_sort | Peisong Li |
collection | DOAJ |
description | Recent advances in lightweight deep learning algorithms and edge computing increasingly enable multiple model inference tasks to run concurrently on resource-constrained edge devices, so that several models can pursue one goal collaboratively rather than each seeking high quality as a standalone task. However, the high overall running latency of multi-model inference degrades real-time applications. To combat this latency, the deployment must be optimized to minimize the running latency of the multi-model workload without compromising safety in safety-critical situations. This work develops a real-time task scheduling strategy for multi-model deployment and investigates model inference with the Open Neural Network Exchange (ONNX) Runtime engine. An application deployment strategy based on container technology is then proposed, and inference tasks are scheduled to different containers according to the scheduling strategies. Experimental results show that the proposed solution significantly reduces the overall running latency of real-time applications. (An illustrative inference sketch follows this record.) |
first_indexed | 2024-03-09T03:51:46Z |
format | Article |
id | doaj.art-b150b143e25b494abd7e14e69f13cc53 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-09T03:51:46Z |
publishDate | 2022-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | Sensors 22(16):6097, published 2022-08-01 by MDPI AG, ISSN 1424-8220, DOI 10.3390/s22166097. Multi-Model Running Latency Optimization in an Edge Computing Paradigm. Peisong Li (School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China); Xinheng Wang (School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China); Kaizhu Huang (Data Science Research Center, Division of Natural and Applied Sciences, Duke Kunshan University, Suzhou 215316, China); Yi Huang (Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3BX, UK); Shancang Li (School of Computer Science and Informatics, Cardiff University, Cardiff CF10 3AT, UK); Muddesar Iqbal (Renewable Energy Laboratory, Communications and Networks Engineering Department, College of Engineering, Prince Sultan University, Riyadh 11586, Saudi Arabia). |
spellingShingle | Peisong Li; Xinheng Wang; Kaizhu Huang; Yi Huang; Shancang Li; Muddesar Iqbal. Multi-Model Running Latency Optimization in an Edge Computing Paradigm. Sensors. edge computing; latency optimization; multi-model; task scheduling; autonomous driving; AI |
title | Multi-Model Running Latency Optimization in an Edge Computing Paradigm |
title_full | Multi-Model Running Latency Optimization in an Edge Computing Paradigm |
title_fullStr | Multi-Model Running Latency Optimization in an Edge Computing Paradigm |
title_full_unstemmed | Multi-Model Running Latency Optimization in an Edge Computing Paradigm |
title_short | Multi-Model Running Latency Optimization in an Edge Computing Paradigm |
title_sort | multi model running latency optimization in an edge computing paradigm |
topic | edge computing; latency optimization; multi-model; task scheduling; autonomous driving; AI |
url | https://www.mdpi.com/1424-8220/22/16/6097 |
work_keys_str_mv | AT peisongli multimodelrunninglatencyoptimizationinanedgecomputingparadigm AT xinhengwang multimodelrunninglatencyoptimizationinanedgecomputingparadigm AT kaizhuhuang multimodelrunninglatencyoptimizationinanedgecomputingparadigm AT yihuang multimodelrunninglatencyoptimizationinanedgecomputingparadigm AT shancangli multimodelrunninglatencyoptimizationinanedgecomputingparadigm AT muddesariqbal multimodelrunninglatencyoptimizationinanedgecomputingparadigm |
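The abstract centers on two mechanisms: running several models through the ONNX Runtime engine and dispatching inference tasks across containers. The sketch below, which is not the authors' implementation, illustrates the first mechanism, concurrent multi-model inference with ONNX Runtime; the model files `detector.onnx` and `segmenter.onnx`, the shared input shape, and the thread settings are all illustrative assumptions.

```python
# Minimal sketch of concurrent multi-model inference with ONNX Runtime.
# The model files, input shape, and thread counts are illustrative
# assumptions, not taken from the paper.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import onnxruntime as ort


def load_session(model_path: str) -> ort.InferenceSession:
    # Cap intra-op threads so several models can share one edge CPU.
    opts = ort.SessionOptions()
    opts.intra_op_num_threads = 2  # tune per device
    return ort.InferenceSession(
        model_path, sess_options=opts, providers=["CPUExecutionProvider"]
    )


def run_model(session: ort.InferenceSession, x: np.ndarray):
    # Input names are read from the model itself rather than hard-coded.
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: x})


if __name__ == "__main__":
    sessions = [load_session(p) for p in ("detector.onnx", "segmenter.onnx")]
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Overlapping the model runs bounds the overall latency by the
    # slowest model instead of the sum of all per-model latencies.
    with ThreadPoolExecutor(max_workers=len(sessions)) as pool:
        futures = [pool.submit(run_model, s, frame) for s in sessions]
        results = [f.result() for f in futures]
```

In the paper's design, the parallel workers would be separate containers selected by the scheduling strategy rather than threads in a single process; the thread pool above only illustrates why overlapping model runs, instead of serializing them, shrinks the overall running latency.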