Multiscale Hybrid Convolutional Deep Neural Networks with Channel Attention
Attention mechanisms can improve the performance of neural networks, but recent attention networks incur greater computational overhead while improving network performance. How to maintain model performance while reducing complexity is an active research topic. In this paper, a lightweight Mixture Attention (MA) module is proposed to improve network performance and reduce model complexity. Firstly, the MA module uses a multi-branch architecture to process the input feature map and extract multi-scale feature information from the input image. Secondly, to reduce the number of parameters, each branch independently uses group convolution, and the feature maps extracted by the different branches are fused along the channel dimension. Finally, the fused feature maps are processed by a channel attention module to extract channel-wise statistical information. The proposed method is efficient yet effective: compared with ResNet50, the network parameters and computational cost are reduced by 9.86% and 7.83%, respectively, while Top-1 accuracy is improved by 1.99%. Experimental results on commonly used benchmarks, including CIFAR-10 for classification and PASCAL VOC for object detection, demonstrate that the proposed MA module significantly outperforms current SOTA methods, achieving higher accuracy with lower model complexity.
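The abstract sketches the MA module's data flow: several parallel branches with different receptive fields, group convolution inside each branch to keep the parameter count low, channel-wise concatenation of the branch outputs, and a channel attention stage over the fused maps. The PyTorch snippet below is a minimal sketch of that flow, not the authors' implementation: the kernel sizes, group counts, class names, and the SE-style attention block are illustrative assumptions, since the record does not specify them.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention: global pooling, bottleneck MLP, sigmoid gating.
    (An assumption; the record only says a channel attention module is used.)"""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # per-channel global statistics
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # re-weight each channel of the fused features


class MixtureAttention(nn.Module):
    """Sketch of the MA module described in the abstract: multi-scale
    group-convolution branches, fusion along the channel dimension,
    then channel attention. Kernel sizes and group counts are assumptions."""

    def __init__(self, in_channels: int, out_channels: int,
                 kernel_sizes=(3, 5, 7, 9), groups=(1, 4, 8, 16)):
        super().__init__()
        assert out_channels % len(kernel_sizes) == 0
        branch_channels = out_channels // len(kernel_sizes)
        # One branch per scale; groups=g must divide both channel counts.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, branch_channels, k,
                      padding=k // 2, groups=g, bias=False)
            for k, g in zip(kernel_sizes, groups)
        ])
        self.attention = ChannelAttention(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multi-branch extraction, then fusion along the channel dimension.
        fused = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.attention(fused)


if __name__ == "__main__":
    ma = MixtureAttention(in_channels=64, out_channels=256)
    out = ma(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 256, 32, 32])
```

Splitting each branch's convolution into `groups=g` independent channel groups is what drives the parameter reduction the abstract reports, since a grouped convolution uses roughly 1/g of the weights of a standard one with the same channel counts.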
Main Authors: | Hua Yang, Ming Yang, Bitao He, Tao Qin, Jing Yang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-08-01 |
Series: | Entropy |
Subjects: | convolutional neural networks; feature fusion; pyramid architecture; channel attention; skip connection |
Online Access: | https://www.mdpi.com/1099-4300/24/9/1180 |
Collection: | DOAJ (Directory of Open Access Journals) |
ISSN: | 1099-4300 |
DOI: | 10.3390/e24091180 |
Citation: | Entropy, Vol. 24, No. 9, Article 1180 (2022) |
Author Affiliations: | Hua Yang, Ming Yang, Tao Qin, Jing Yang: Electrical Engineering College, Guizhou University, Guiyang 550025, China; Bitao He: Power China Guizhou Engineering Co., Ltd., Guiyang 550001, China |