Compression of Deep-Learning Models Through Global Weight Pruning Using Alternating Direction Method of Multipliers
Abstract Deep learning has shown excellent performance in numerous machine-learning tasks, but a practical obstacle is the huge amount of computation and memory it requires. Model compression, especially of deep-learning models, is very useful because it saves memory and reduces sto...
Main Authors: Kichun Lee, Sunghun Hwangbo, Dongwook Yang, Geonseok Lee
Format: Article
Language: English
Published: Springer, 2023-02-01
Series: International Journal of Computational Intelligence Systems
Online Access: https://doi.org/10.1007/s44196-023-00202-z
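The title describes global weight pruning via the alternating direction method of multipliers (ADMM): the sparsity constraint is split off into an auxiliary variable that is updated by projection, alternating with a gradient-based weight update and a dual update. A minimal NumPy sketch of that general scheme is below; the toy quadratic loss, function names, and hyperparameters are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def project_sparse(W, k):
    """Euclidean projection onto {W : number of nonzeros <= k}:
    zero out all but the k largest-magnitude entries (ties may keep extras)."""
    flat = np.abs(W).ravel()
    if k >= flat.size:
        return W.copy()
    thresh = np.partition(flat, -k)[-k]
    return np.where(np.abs(W) >= thresh, W, 0.0)

def admm_prune(W, k, rho=1e-2, outer_steps=30, inner_steps=20, lr=0.1):
    """Heuristic ADMM loop for weight pruning. A toy quadratic loss
    0.5*||W - W0||^2 stands in for the training loss; all names and
    hyperparameters here are illustrative assumptions."""
    W0 = W.copy()
    Z = project_sparse(W, k)      # auxiliary sparse copy of the weights
    U = np.zeros_like(W)          # scaled dual variable
    for _ in range(outer_steps):
        # W-update: gradient descent on loss + (rho/2)*||W - Z + U||^2
        for _ in range(inner_steps):
            grad = (W - W0) + rho * (W - Z + U)
            W = W - lr * grad
        Z = project_sparse(W + U, k)   # Z-update: projection onto the sparsity set
        U = U + W - Z                  # dual update
    # hard-prune at the end so the sparsity constraint holds exactly
    return project_sparse(W, k)
```

Applied to a full network, the projection would be taken over all layers' weights jointly (hence "global" pruning), and the W-update would be a few SGD steps on the real training loss with the augmented penalty term added.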
Similar Items
- Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks
  by: Tao Wu, et al. Published: (2021-01-01)
- Compression of Deep Convolutional Neural Network Using Additional Importance-Weight-Based Filter Pruning Approach
  by: Shrutika S. Sawant, et al. Published: (2022-11-01)
- Pruning Policy for Image Classification Problems Based on Deep Learning
  by: Cesar G. Pachon, et al. Published: (2024-09-01)
- Weight pruning-UNet: Weight pruning UNet with depth-wise separable convolutions for semantic segmentation of kidney tumors
  by: Patike Kiran Rao, et al. Published: (2022-01-01)
- Compressing Convolutional Neural Networks by Pruning Density Peak Filters
  by: Yunseok Jang, et al. Published: (2021-01-01)