L-GhostNet: Extract Better Quality Features
A lightweight image recognition model, L-GhostNet, based on an improved GhostNet, is proposed to address the extensive computation and high storage cost of deep convolutional neural networks. The model incorporates learning group convolution and an improved coordinate attention (CA) module into GhostNet to reduce the computation and number of parameters and to improve the flexibility of the network.
Main Authors: | Jing Chi, Shaohua Guo, Haopeng Zhang, Yu Shan |
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Coordinate attention, GhostNet, group convolution, lightweight convolutional neural network |
Online Access: | https://ieeexplore.ieee.org/document/10005301/ |
_version_ | 1828067634135957504 |
author | Jing Chi, Shaohua Guo, Haopeng Zhang, Yu Shan |
author_facet | Jing Chi, Shaohua Guo, Haopeng Zhang, Yu Shan |
author_sort | Jing Chi |
collection | DOAJ |
description | A lightweight image recognition model, L-GhostNet, based on an improved GhostNet, is proposed to address the extensive computation and high storage cost of deep convolutional neural networks. The model incorporates learning group convolution and an improved coordinate attention (CA) module into GhostNet to reduce the computation and number of parameters and to improve the flexibility of the network. In addition, a pruning ratio is introduced into the learning group convolution to control when pruning ends during the whole training process, and the improved CA replaces the convolutional layer with a fully connected layer, which tightens the connection between the two spatial dimensions and further increases the flexibility of the model. Experiments on datasets from various fields, including grape leaf recognition, gesture recognition, face recognition, rice recognition, and CIFAR-10, show that compared with GhostNet, L-GhostNet slightly improves accuracy, reduces computation by more than 44%, cuts the number of parameters by more than 33%, and raises FPS by 26% on all datasets. Compared with other commonly used lightweight network models, MobileNets and ShuffleNets, it achieves the best overall performance: the lowest FLOPs and the highest accuracy on all datasets, and fewer parameters at the same level of FLOPs. |
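To make the CA modification described above concrete, here is a minimal sketch (not the authors' published code) of a coordinate-attention block in which the usual 1x1 convolutions are replaced by fully connected layers, as the abstract states. The class name, the reduction ratio, and the Hardswish activation are illustrative assumptions layered on the standard coordinate-attention design; the paper's exact layer sizes and normalization may differ.

```python
# A sketch of coordinate attention with fully connected layers in place of
# the 1x1 convolutions used in standard CA. Hypothetical names and sizes.
import torch
import torch.nn as nn


class FCCoordAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(8, channels // reduction)
        # Fully connected layers in place of the usual 1x1 convolutions.
        self.fc_squeeze = nn.Linear(channels, hidden)
        self.act = nn.Hardswish()
        self.fc_h = nn.Linear(hidden, channels)
        self.fc_w = nn.Linear(hidden, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Directional pooling: one channel descriptor per row and per column.
        pool_h = x.mean(dim=3)  # (N, C, H)
        pool_w = x.mean(dim=2)  # (N, C, W)
        # Concatenate along the spatial axis and move channels last, so each
        # fully connected layer mixes channel information at every position.
        y = torch.cat([pool_h, pool_w], dim=2).permute(0, 2, 1)  # (N, H+W, C)
        y = self.act(self.fc_squeeze(y))                         # (N, H+W, hidden)
        y_h, y_w = torch.split(y, [h, w], dim=1)
        # Per-direction attention maps, restored to broadcastable shapes.
        a_h = torch.sigmoid(self.fc_h(y_h)).permute(0, 2, 1).unsqueeze(3)  # (N, C, H, 1)
        a_w = torch.sigmoid(self.fc_w(y_w)).permute(0, 2, 1).unsqueeze(2)  # (N, C, 1, W)
        return x * a_h * a_w


# Quick shape check: the output matches the input.
att = FCCoordAttention(channels=64)
out = att(torch.randn(2, 64, 32, 32))
assert out.shape == (2, 64, 32, 32)
```

Since a 1x1 convolution is equivalent to a linear layer applied independently at each spatial position, the substitution above preserves the shape arithmetic of standard CA while expressing the channel mixing through `nn.Linear`, which is one plausible reading of the "fully connected layer replaces the convolutional layer" change.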
first_indexed | 2024-04-10T23:48:36Z |
format | Article |
id | doaj.art-e06b0f4d9624474aad8a056224e305d5 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-04-10T23:48:36Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-e06b0f4d9624474aad8a056224e305d5 | Indexed: 2023-01-11T00:00:40Z | English | IEEE | IEEE Access | ISSN 2169-3536 | Published 2023-01-01 | Vol. 11, pp. 2361-2374 | DOI 10.1109/ACCESS.2023.3234108 | IEEE document 10005301 | L-GhostNet: Extract Better Quality Features | Jing Chi (https://orcid.org/0000-0001-8872-9987), Shaohua Guo, Haopeng Zhang, Yu Shan — School of Information and Electrical Engineering, Hebei University of Engineering, Hebei, China | https://ieeexplore.ieee.org/document/10005301/ | Coordinate attention; GhostNet; group convolution; lightweight convolutional neural network |
title | L-GhostNet: Extract Better Quality Features |
title_full | L-GhostNet: Extract Better Quality Features |
title_fullStr | L-GhostNet: Extract Better Quality Features |
title_full_unstemmed | L-GhostNet: Extract Better Quality Features |
title_short | L-GhostNet: Extract Better Quality Features |
title_sort | l ghostnet extract better quality features |
topic | Coordinate attention, GhostNet, group convolution, lightweight convolutional neural network |
url | https://ieeexplore.ieee.org/document/10005301/ |
work_keys_str_mv | AT jingchi lghostnetextractbetterqualityfeatures AT shaohuaguo lghostnetextractbetterqualityfeatures AT haopengzhang lghostnetextractbetterqualityfeatures AT yushan lghostnetextractbetterqualityfeatures |