Data-Aware Adaptive Pruning Model Compression Algorithm Based on a Group Attention Mechanism and Reinforcement Learning
The success of convolutional neural networks (CNNs) stems from stacking convolutional layers, which enlarges the model's receptive field over image data but also slows inference. To improve the inference speed of large convolutional network models without sac...
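The abstract describes channel pruning guided by attention-style importance scores, with a reinforcement learning agent choosing per-layer pruning ratios. The paper's group attention mechanism and RL policy are not detailed here, so the following is only a minimal illustrative sketch: channels are scored by mean absolute activation (a hypothetical stand-in for the attention scores), and a given ratio of the lowest-scoring channels is masked out.

```python
import numpy as np

def channel_importance(feature_maps):
    # feature_maps: (N, C, H, W) activations from one conv layer.
    # Score each channel by its mean absolute activation -- a simple
    # attention-style proxy, not the paper's group attention mechanism.
    return np.abs(feature_maps).mean(axis=(0, 2, 3))

def prune_mask(scores, ratio):
    # Keep the top-(1 - ratio) channels; in RL-based schemes the
    # per-layer ratio would be chosen by the learned agent.
    k = int(len(scores) * (1 - ratio))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask

rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 16, 32, 32))  # fake batch of activations
scores = channel_importance(acts)
mask = prune_mask(scores, ratio=0.5)
print(mask.sum())  # 8 of 16 channels kept
```

In a real pipeline the surviving channels (and the matching filters in the next layer) would be physically removed and the network fine-tuned to recover accuracy.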
Main Authors: Zhi Yang, Yuan Zhai, Yi Xiang, Jianquan Wu, Jinliang Shi, Ying Wu
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9813741/
Similar Items
- A Novel Channel Pruning Compression Algorithm Combined with an Attention Mechanism, by: Ming Zhao, et al. Published: (2023-04-01)
- Filter Pruning via Attention Consistency on Feature Maps, by: Huoxiang Yang, et al. Published: (2023-02-01)
- Pruning With Scaled Policy Constraints for Light-Weight Reinforcement Learning, by: Seongmin Park, et al. Published: (2024-01-01)
- Model Compression Algorithm via Reinforcement Learning and Knowledge Distillation, by: Botao Liu, et al. Published: (2023-11-01)
- Comments and Corrections: Correction to "Data-Aware Adaptive Pruning Model Compression Algorithm Based on a Group Attention Mechanism and Reinforcement Learning", by: Zhi Yang, et al. Published: (2022-01-01)