Modification of SqueezeNet for Devices with Limited Computational Resources
In recent years, the computational approach has shifted from a statistical basis to deep neural network architectures that process the input without explicit knowledge of the underlying model. Many high-accuracy models have been proposed by training on datasets using high-performance computing devices. However, only a few studies have examined their use on non-high-performance computers. In fact, many users, often researchers in fields such as medicine, geography, and economics, need to process datasets on computers with limited computational resources, from notebooks and personal computers to mobile-processor-based devices. This study proposes a basic model that achieves good accuracy and runs lightly on an average computer, so that it remains lightweight when used as a basis for advanced deep neural network models such as U-Net, SegNet, PSPNet, and DeepLab. Using several well-known basic methods as baselines (SqueezeNet, ShuffleNet, GoogleNet, MobileNetV2, and ResNet), a model combining SqueezeNet with ResNet, termed Res-SqueezeNet, was formed. Testing results show that the proposed method achieves an accuracy of 84.59% and an inference time of 8.46 seconds: about two percentage points more accurate than SqueezeNet (82.53%) and close to the other baseline methods (84.93% to 88.01%) while still maintaining inference speed (below nine seconds). In addition, the residual part of the proposed method helps avoid vanishing gradients, so it can be implemented to solve more advanced problems that need many layers, such as semantic segmentation and time-series prediction.
Main Authors: | Rahmadya Trias Handayanto, Herlawati |
---|---|
Format: | Article |
Language: | English |
Published: | Ikatan Ahli Informatika Indonesia, 2023-02-01 |
Series: | Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) |
Subjects: | deep learning, squeezenet, resnet, imagenet, convolutional layer |
Online Access: | http://jurnal.iaii.or.id/index.php/RESTI/article/view/4446 |
author | Rahmadya Trias Handayanto Herlawati |
collection | DOAJ |
description | In recent years, the computational approach has shifted from a statistical basis to deep neural network architectures that process the input without explicit knowledge of the underlying model. Many high-accuracy models have been proposed by training on datasets using high-performance computing devices. However, only a few studies have examined their use on non-high-performance computers. In fact, many users, often researchers in fields such as medicine, geography, and economics, need to process datasets on computers with limited computational resources, from notebooks and personal computers to mobile-processor-based devices. This study proposes a basic model that achieves good accuracy and runs lightly on an average computer, so that it remains lightweight when used as a basis for advanced deep neural network models such as U-Net, SegNet, PSPNet, and DeepLab. Using several well-known basic methods as baselines (SqueezeNet, ShuffleNet, GoogleNet, MobileNetV2, and ResNet), a model combining SqueezeNet with ResNet, termed Res-SqueezeNet, was formed. Testing results show that the proposed method achieves an accuracy of 84.59% and an inference time of 8.46 seconds: about two percentage points more accurate than SqueezeNet (82.53%) and close to the other baseline methods (84.93% to 88.01%) while still maintaining inference speed (below nine seconds). In addition, the residual part of the proposed method helps avoid vanishing gradients, so it can be implemented to solve more advanced problems that need many layers, such as semantic segmentation and time-series prediction. |
format | Article |
id | doaj.art-aa088fcbddde4b648b8f90313f4a1bfc |
institution | Directory Open Access Journal |
issn | 2580-0760 |
language | English |
publishDate | 2023-02-01 |
publisher | Ikatan Ahli Informatika Indonesia |
record_format | Article |
series | Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) |
spelling | doaj.art-aa088fcbddde4b648b8f90313f4a1bfc |
doi | 10.29207/resti.v7i1.4446 |
volume | 7 |
issue | 1 |
pages | 153-160 |
affiliations | Rahmadya Trias Handayanto (Universitas Islam 45 Bekasi); Herlawati (Universitas Bhayangkara Jakarta Raya) |
title | Modification of SqueezeNet for Devices with Limited Computational Resources |
topic | deep learning, squeezenet, resnet, imagenet, convolutional layer |
url | http://jurnal.iaii.or.id/index.php/RESTI/article/view/4446 |
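The record does not include the authors' implementation, but the combination it describes (a SqueezeNet Fire module wrapped with a ResNet-style identity shortcut) can be sketched in plain NumPy. All function and parameter names below are illustrative assumptions, not the paper's actual code, and the channel sizes are chosen only so the residual addition is shape-compatible.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(x, w):
    # Pointwise convolution. x: (C_in, H, W), w: (C_out, C_in).
    return np.tensordot(w, x, axes=([1], [0]))

def conv3x3(x, w):
    # 3x3 convolution with zero padding of 1 (output keeps H, W).
    # x: (C_in, H, W), w: (C_out, C_in, 3, 3).
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            out += np.tensordot(w[:, :, i, j], xp[:, i:i + h, j:j + wd],
                                axes=([1], [0]))
    return out

def res_fire_module(x, w_squeeze, w_expand1, w_expand3):
    # SqueezeNet Fire module with a ResNet-style identity shortcut.
    # The shortcut is valid only when the expand branches together
    # produce as many channels as the input has.
    s = relu(conv1x1(x, w_squeeze))    # squeeze: 1x1, fewer channels
    e1 = relu(conv1x1(s, w_expand1))   # expand branch: 1x1
    e3 = relu(conv3x3(s, w_expand3))   # expand branch: 3x3
    return np.concatenate([e1, e3], axis=0) + x  # residual add
```

For example, an 8-channel input squeezed to 2 channels and expanded back to 4 + 4 = 8 channels lets the identity shortcut be added directly; mismatched channel counts would need a projection (e.g., an extra 1x1 convolution) on the shortcut path.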