Advancements in On-Device Deep Neural Networks
In recent years, rapid advancements in both hardware and software technologies have resulted in the ability to execute artificial intelligence (AI) algorithms on low-resource devices. The combination of high-speed, low-power electronic hardware and efficient AI algorithms is driving the emergence of on-device AI. Deep neural networks (DNNs) are highly effective AI algorithms used for identifying patterns in complex data. DNNs, however, contain many parameters and operations that make them computationally intensive to execute. Accordingly, DNNs are usually executed on high-resource backend processors. This causes an increase in data processing latency and energy expenditure. Therefore, modern strategies are being developed to facilitate the implementation of DNNs on devices with limited resources. This paper presents a detailed review of the current methods and structures that have been developed to deploy DNNs on devices with limited resources. Firstly, an overview of DNNs is presented. Next, the methods used to implement DNNs on resource-constrained devices are explained. Following this, the existing works reported in the literature on the execution of DNNs on low-resource devices are reviewed. The reviewed works are classified into three categories: software, hardware, and hardware/software co-design. Then, a discussion on the reviewed approaches is given, followed by a list of challenges and future prospects of on-device AI, together with its emerging applications.
Main Authors: | Kavya Saravanan, Abbas Z. Kouzani |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-08-01 |
Series: | Information |
Subjects: | artificial intelligence; deep neural networks; resource-constrained devices; on-device AI |
Online Access: | https://www.mdpi.com/2078-2489/14/8/470 |
_version_ | 1797584370849021952 |
---|---|
author | Kavya Saravanan; Abbas Z. Kouzani |
author_facet | Kavya Saravanan; Abbas Z. Kouzani |
author_sort | Kavya Saravanan |
collection | DOAJ |
description | In recent years, rapid advancements in both hardware and software technologies have resulted in the ability to execute artificial intelligence (AI) algorithms on low-resource devices. The combination of high-speed, low-power electronic hardware and efficient AI algorithms is driving the emergence of on-device AI. Deep neural networks (DNNs) are highly effective AI algorithms used for identifying patterns in complex data. DNNs, however, contain many parameters and operations that make them computationally intensive to execute. Accordingly, DNNs are usually executed on high-resource backend processors. This causes an increase in data processing latency and energy expenditure. Therefore, modern strategies are being developed to facilitate the implementation of DNNs on devices with limited resources. This paper presents a detailed review of the current methods and structures that have been developed to deploy DNNs on devices with limited resources. Firstly, an overview of DNNs is presented. Next, the methods used to implement DNNs on resource-constrained devices are explained. Following this, the existing works reported in the literature on the execution of DNNs on low-resource devices are reviewed. The reviewed works are classified into three categories: software, hardware, and hardware/software co-design. Then, a discussion on the reviewed approaches is given, followed by a list of challenges and future prospects of on-device AI, together with its emerging applications. |
first_indexed | 2024-03-10T23:51:55Z |
format | Article |
id | doaj.art-eadebf6a5b404405b4aed797c74fd0b5 |
institution | Directory Open Access Journal |
issn | 2078-2489 |
language | English |
last_indexed | 2024-03-10T23:51:55Z |
publishDate | 2023-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Information |
spelling | doaj.art-eadebf6a5b404405b4aed797c74fd0b5; indexed 2023-11-19T01:35:13Z; eng; MDPI AG; Information; ISSN 2078-2489; 2023-08-01; vol. 14, no. 8, art. 470; doi:10.3390/info14080470; Advancements in On-Device Deep Neural Networks; Kavya Saravanan (School of Engineering, Deakin University, Geelong, VIC 3216, Australia); Abbas Z. Kouzani (School of Engineering, Deakin University, Geelong, VIC 3216, Australia); https://www.mdpi.com/2078-2489/14/8/470; keywords: artificial intelligence, deep neural networks, resource-constrained devices, on-device AI |
spellingShingle | Kavya Saravanan; Abbas Z. Kouzani; Advancements in On-Device Deep Neural Networks; Information; artificial intelligence; deep neural networks; resource-constrained devices; on-device AI |
title | Advancements in On-Device Deep Neural Networks |
title_full | Advancements in On-Device Deep Neural Networks |
title_fullStr | Advancements in On-Device Deep Neural Networks |
title_full_unstemmed | Advancements in On-Device Deep Neural Networks |
title_short | Advancements in On-Device Deep Neural Networks |
title_sort | advancements in on device deep neural networks |
topic | artificial intelligence; deep neural networks; resource-constrained devices; on-device AI |
url | https://www.mdpi.com/2078-2489/14/8/470 |
work_keys_str_mv | AT kavyasaravanan advancementsinondevicedeepneuralnetworks AT abbaszkouzani advancementsinondevicedeepneuralnetworks |