Neural architecture search for resource constrained hardware devices: A survey
Abstract With the emergence of powerful and low‐energy Internet of Things devices, deep learning computing is increasingly applied to resource‐constrained edge devices. However, the mismatch between hardware devices with low computing capacity and the increasing complexity of Deep Neural Network models, as well as the growing real‐time requirements, bring challenges to the design and deployment of deep learning models. For example, autonomous driving technologies rely on real‐time object detection of the environment, which cannot tolerate the extra latency of sending data to the cloud, processing and then sending the results back to edge devices. Many studies aim to find innovative ways to reduce the size of deep learning models, the number of Floating‐point Operations per Second, and the time overhead of inference. Neural Architecture Search (NAS) makes it possible to automatically generate efficient neural network models. The authors summarise the existing NAS methods on resource‐constrained devices and categorise them according to single‐objective or multi‐objective optimisation. We review the search space, the search algorithm and the constraints of NAS on hardware devices. We also explore the challenges and open problems of hardware NAS.
Main Authors: | Yongjia Yang, Jinyu Zhan, Wei Jiang, Yucheng Jiang, Antai Yu |
Format: | Article |
Language: | English |
Published: | Wiley, 2023-09-01 |
Series: | IET Cyber-Physical Systems |
Subjects: | automata theory; automation; complex networks; learning (artificial intelligence); neural nets |
Online Access: | https://doi.org/10.1049/cps2.12058 |
_version_ | 1797689943225532416 |
author | Yongjia Yang, Jinyu Zhan, Wei Jiang, Yucheng Jiang, Antai Yu |
author_facet | Yongjia Yang, Jinyu Zhan, Wei Jiang, Yucheng Jiang, Antai Yu |
author_sort | Yongjia Yang |
collection | DOAJ |
description | Abstract With the emergence of powerful and low‐energy Internet of Things devices, deep learning computing is increasingly applied to resource‐constrained edge devices. However, the mismatch between hardware devices with low computing capacity and the increasing complexity of Deep Neural Network models, as well as the growing real‐time requirements, bring challenges to the design and deployment of deep learning models. For example, autonomous driving technologies rely on real‐time object detection of the environment, which cannot tolerate the extra latency of sending data to the cloud, processing and then sending the results back to edge devices. Many studies aim to find innovative ways to reduce the size of deep learning models, the number of Floating‐point Operations per Second, and the time overhead of inference. Neural Architecture Search (NAS) makes it possible to automatically generate efficient neural network models. The authors summarise the existing NAS methods on resource‐constrained devices and categorise them according to single‐objective or multi‐objective optimisation. We review the search space, the search algorithm and the constraints of NAS on hardware devices. We also explore the challenges and open problems of hardware NAS. |
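The abstract frames hardware NAS as an optimisation problem: search a space of candidate architectures for one that maximises accuracy while satisfying a hardware resource constraint (model size, FLOPs, or latency). As a minimal sketch only, the loop below shows constrained random search over a toy search space; the space, the parameter-count proxy, and the synthetic accuracy estimate are all illustrative assumptions, not methods from the survey.

```python
import random

# Hypothetical toy search space: each architecture is a choice of
# depth, width, and kernel size (illustrative only).
SEARCH_SPACE = {
    "depth": [4, 8, 12],
    "width": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def param_count(arch):
    # Crude proxy for model size: layers * width^2 * kernel^2.
    return arch["depth"] * arch["width"] ** 2 * arch["kernel"] ** 2

def proxy_accuracy(arch):
    # Synthetic stand-in for a trained-accuracy estimate: larger
    # models score higher, with diminishing returns.
    return 1.0 - 1.0 / (1.0 + 1e-5 * param_count(arch))

def constrained_random_search(max_params, trials=200, seed=0):
    """Single-objective NAS sketch: maximise a proxy accuracy subject
    to a hard resource constraint, as in hardware-aware NAS."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        # Sample a candidate architecture uniformly from the space.
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        if param_count(arch) > max_params:
            continue  # reject candidates that violate the constraint
        score = proxy_accuracy(arch)
        if best is None or score > best[0]:
            best = (score, arch)
    return best
```

A multi-objective variant, as categorised in the survey, would instead keep a Pareto front over (accuracy, size, latency) rather than a single best candidate.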
first_indexed | 2024-03-12T01:52:31Z |
format | Article |
id | doaj.art-a5c788d5c9484094b5075979f1abfd52 |
institution | Directory Open Access Journal |
issn | 2398-3396 |
language | English |
last_indexed | 2024-03-12T01:52:31Z |
publishDate | 2023-09-01 |
publisher | Wiley |
record_format | Article |
series | IET Cyber-Physical Systems |
spelling | doaj.art-a5c788d5c9484094b5075979f1abfd52 2023-09-08T09:04:19Z eng Wiley IET Cyber-Physical Systems 2398-3396 2023-09-01 vol. 8, no. 3, pp. 149-159 10.1049/cps2.12058 Neural architecture search for resource constrained hardware devices: A survey Yongjia Yang, Jinyu Zhan, Wei Jiang, Yucheng Jiang, Antai Yu (all: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China) [abstract as in the description field above] https://doi.org/10.1049/cps2.12058 automata theory; automation; complex networks; learning (artificial intelligence); neural nets |
spellingShingle | Yongjia Yang Jinyu Zhan Wei Jiang Yucheng Jiang Antai Yu Neural architecture search for resource constrained hardware devices: A survey IET Cyber-Physical Systems automata theory automation complex networks learning (artificial intelligence) neural nets |
title | Neural architecture search for resource constrained hardware devices: A survey |
title_full | Neural architecture search for resource constrained hardware devices: A survey |
title_fullStr | Neural architecture search for resource constrained hardware devices: A survey |
title_full_unstemmed | Neural architecture search for resource constrained hardware devices: A survey |
title_short | Neural architecture search for resource constrained hardware devices: A survey |
title_sort | neural architecture search for resource constrained hardware devices a survey |
topic | automata theory automation complex networks learning (artificial intelligence) neural nets |
url | https://doi.org/10.1049/cps2.12058 |
work_keys_str_mv | AT yongjiayang neuralarchitecturesearchforresourceconstrainedhardwaredevicesasurvey AT jinyuzhan neuralarchitecturesearchforresourceconstrainedhardwaredevicesasurvey AT weijiang neuralarchitecturesearchforresourceconstrainedhardwaredevicesasurvey AT yuchengjiang neuralarchitecturesearchforresourceconstrainedhardwaredevicesasurvey AT antaiyu neuralarchitecturesearchforresourceconstrainedhardwaredevicesasurvey |