Neural network pruning based on channel attention mechanism


Bibliographic Details
Main Authors: Jianqiang Hu, Yang Liu, Keshou Wu
Format: Article
Language: English
Published: Taylor & Francis Group, 2022-12-01
Series: Connection Science
Online Access: http://dx.doi.org/10.1080/09540091.2022.2111405
Description
Summary: Network pruning facilitates the deployment of convolutional neural networks in resource-limited environments by reducing redundant parameters. However, most existing methods ignore differences in the contributions of the output feature maps. In response, we propose a novel neural network pruning method based on the channel attention mechanism. In this paper, we first utilise the principal component analysis algorithm to reduce the influence of noisy data on feature maps. Then, we propose an improved Leaky-Squeeze-and-Excitation block that evaluates the contribution of each output feature map using the channel attention mechanism. Finally, we remove low-contribution channels while preserving model performance as far as possible. Extensive experimental results show that our proposed method achieves significant improvements over the state of the art in FLOPs and parameter reduction at similar accuracy. For example, with VGG-16-baseline, our proposed method reduces parameters by 83.3% and FLOPs by 66.3%, with a loss of only 0.13% in top-5 accuracy. Furthermore, it effectively balances pruning efficiency and prediction accuracy.
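
The abstract only sketches the pipeline, so the following is a minimal PyTorch illustration of attention-guided channel pruning, not the paper's actual implementation. It assumes the Leaky-Squeeze-and-Excitation block is a standard SE block with LeakyReLU replacing ReLU in the excitation MLP; the PCA denoising step is omitted, and all names (LeakySEBlock, score_channels, prune_low_attention_channels, keep_ratio) are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn


class LeakySEBlock(nn.Module):
    """SE-style channel attention; the LeakyReLU swap is an assumption."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global average pool
        self.fc = nn.Sequential(                       # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.LeakyReLU(0.1, inplace=True),           # assumed "leaky" variant
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c))           # (B, C) attention scores
        return x * w.view(b, c, 1, 1), w


@torch.no_grad()
def score_channels(se: LeakySEBlock, feats: torch.Tensor) -> torch.Tensor:
    """Average attention weights over a calibration batch of feature maps."""
    _, w = se(feats)
    return w.mean(dim=0)                               # (C,) mean contribution per channel


def prune_low_attention_channels(conv: nn.Conv2d, scores: torch.Tensor,
                                 keep_ratio: float = 0.5) -> nn.Conv2d:
    """Rebuild a conv layer, keeping only the highest-scoring output channels."""
    k = max(1, int(conv.out_channels * keep_ratio))
    keep = torch.topk(scores, k).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned
```

In a full pipeline, pruning a layer's output channels also requires slicing the input dimension of the next layer (and any batch-norm statistics in between), followed by fine-tuning to recover accuracy; this sketch shows only the per-layer scoring and rebuild step.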
ISSN: 0954-0091, 1360-0494