Training-free neural architecture search: A review
The goal of neural architecture search (NAS) is to downsize the architecture and model of a deep neural network (DNN), adjust an architecture to improve its end result, or speed up the whole training process. Such improvements make it possible to generate or install the model of a DNN on a small device, such as an Internet of Things or wireless sensor network device. Because most NAS algorithms are time-consuming, finding a way to reduce their computation cost has become a critical research issue. The training-free approach (also called zero-shot NAS) offers a more efficient way to estimate how good a neural architecture is during the search: a lightweight score function replaces the usual training process and thus avoids its heavy cost. This paper starts with a brief discussion of DNNs and NAS, followed by a review of both model-dependent and model-independent training-free score functions. It then briefly introduces the search algorithms and benchmarks widely used in training-free NAS. Finally, the challenges, potential, open issues, and future trends of this research topic are addressed.
| Main Authors: | Meng-Ting Wu; Chun-Wei Tsai |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2024-02-01 |
| Series: | ICT Express |
| Subjects: | Neural architecture search; Deep neural network; Training-free; Zero-shot; Internet of things |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2405959523001443 |
_version_ | 1797305240674893824 |
author | Meng-Ting Wu; Chun-Wei Tsai |
author_facet | Meng-Ting Wu; Chun-Wei Tsai |
author_sort | Meng-Ting Wu |
collection | DOAJ |
description | The goal of neural architecture search (NAS) is to downsize the architecture and model of a deep neural network (DNN), adjust an architecture to improve its end result, or speed up the whole training process. Such improvements make it possible to generate or install the model of a DNN on a small device, such as an Internet of Things or wireless sensor network device. Because most NAS algorithms are time-consuming, finding a way to reduce their computation cost has become a critical research issue. The training-free approach (also called zero-shot NAS) offers a more efficient way to estimate how good a neural architecture is during the search: a lightweight score function replaces the usual training process and thus avoids its heavy cost. This paper starts with a brief discussion of DNNs and NAS, followed by a review of both model-dependent and model-independent training-free score functions. It then briefly introduces the search algorithms and benchmarks widely used in training-free NAS. Finally, the challenges, potential, open issues, and future trends of this research topic are addressed. |
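The "lightweight score function" the abstract describes can be illustrated with a small sketch in the spirit of the NASWOT zero-cost proxy: an untrained ReLU network is scored by how distinctly it activates on a random minibatch, with no training at all. Everything below — the toy MLP weights, the layer sizes, and the `naswot_score` name — is illustrative, not code from the paper under review.

```python
import numpy as np

def naswot_score(weights, x):
    """NASWOT-style training-free score: push a random minibatch through
    an untrained ReLU network, record each input's binary activation
    pattern, and score the architecture by the log-determinant of the
    pattern-similarity kernel. Higher scores suggest the network
    separates inputs better at initialization."""
    codes = []
    h = x
    for w in weights:
        pre = h @ w
        codes.append(pre > 0)              # binary activation code per layer
        h = np.maximum(pre, 0)             # ReLU
    c = np.concatenate(codes, axis=1)      # (batch, total_units) binary matrix
    n_units = c.shape[1]
    # pairwise Hamming distance between activation codes
    ham = (c[:, None, :] != c[None, :, :]).sum(-1)
    k = (n_units - ham).astype(float)      # similarity kernel
    _, logdet = np.linalg.slogdet(k)
    return logdet

rng = np.random.default_rng(0)
batch = rng.standard_normal((16, 8))
# two candidate "architectures": different widths, both untrained
arch_a = [rng.standard_normal((8, 32)), rng.standard_normal((32, 32))]
arch_b = [rng.standard_normal((8, 16)), rng.standard_normal((16, 16))]
print(naswot_score(arch_a, batch), naswot_score(arch_b, batch))
```

In a real training-free NAS loop, a search algorithm would draw candidate architectures from a benchmark search space (e.g. NAS-Bench-201) and rank them by such a score instead of by trained accuracy.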
first_indexed | 2024-03-08T00:23:04Z |
format | Article |
id | doaj.art-98e34dc41fc944bcaa82a0e528a94f4d |
institution | Directory Open Access Journal |
issn | 2405-9595 |
language | English |
last_indexed | 2024-03-08T00:23:04Z |
publishDate | 2024-02-01 |
publisher | Elsevier |
record_format | Article |
series | ICT Express |
spelling | doaj.art-98e34dc41fc944bcaa82a0e528a94f4d; 2024-02-16T04:29:48Z; eng; Elsevier; ICT Express; 2405-9595; 2024-02-01; 10(1): 213–231; Training-free neural architecture search: A review; Meng-Ting Wu, Chun-Wei Tsai (corresponding author), both: Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan; http://www.sciencedirect.com/science/article/pii/S2405959523001443; Neural architecture search; Deep neural network; Training-free; Zero-shot; Internet of things |
title | Training-free neural architecture search: A review |
topic | Neural architecture search; Deep neural network; Training-free; Zero-shot; Internet of things |
url | http://www.sciencedirect.com/science/article/pii/S2405959523001443 |
work_keys_str_mv | AT mengtingwu trainingfreeneuralarchitecturesearchareview AT chunweitsai trainingfreeneuralarchitecturesearchareview |