Kernel-wise difference minimization for convolutional neural network compression in metaverse
Convolutional neural networks have achieved remarkable success in computer vision research. However, to further improve their performance, network models have become increasingly complex and require more memory and computational resources. As a result, model compression has become an essential area...
Main Author: | Yi-Ting Chang |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2023-08-01 |
Series: | Frontiers in Big Data |
Subjects: | metaverse; computer vision; Huffman coding; filter-level pruning; CNN |
Online Access: | https://www.frontiersin.org/articles/10.3389/fdata.2023.1200382/full |
_version_ | 1797754537358917632 |
author | Yi-Ting Chang |
author_facet | Yi-Ting Chang |
author_sort | Yi-Ting Chang |
collection | DOAJ |
description | Convolutional neural networks have achieved remarkable success in computer vision research. However, to further improve their performance, network models have become increasingly complex and require more memory and computational resources. As a result, model compression has become an essential area of research in recent years. In this study, we focus on the best-case scenario for Huffman coding, which involves data with lower entropy. Building on this concept, we formulate compression as a filter-wise difference minimization problem and propose a novel algorithm to solve it. Our approach involves filter-level pruning, followed by minimizing the difference between filters. Additionally, we perform filter permutation to further enhance compression. Our proposed algorithm achieves a compression rate of 94× on LeNet-5 and 50× on VGG16. The results demonstrate the effectiveness of our method in significantly reducing the size of deep neural networks while maintaining a high level of accuracy. We believe that our approach holds great promise in advancing the field of model compression and can benefit various applications that require efficient neural network models. Overall, this study provides important insights and contributions toward addressing the challenges of model compression in deep neural networks. |
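The abstract's core idea — that Huffman coding performs best on low-entropy data, so storing differences between similar filters compresses better than storing the filters themselves — can be illustrated with a minimal sketch. This is not the paper's code: the two quantized 3×3 filters below are hypothetical, and Shannon entropy stands in as a proxy for Huffman code length (Huffman's average code length approaches the entropy of the symbol distribution).

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy in bits per symbol of a list of discrete values."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Two hypothetical integer-quantized 3x3 filters that differ only slightly,
# as one would hope after the paper's difference-minimization step.
f1 = [5, -3, 2, 7, 0, -1, 4, 6, -2]
f2 = [5, -3, 3, 7, 0, -1, 4, 5, -2]

# Option A: store both filters' weights directly.
raw = f1 + f2

# Option B: store the first filter plus the element-wise difference to the
# second. Similar filters make the difference cluster around zero, which
# skews the symbol distribution and lowers its entropy.
delta = f1 + [b - a for a, b in zip(f1, f2)]

print(f"raw entropy:   {entropy(raw):.3f} bits/symbol")
print(f"delta entropy: {entropy(delta):.3f} bits/symbol")  # lower
```

In the same spirit, the filter permutation the abstract mentions can be read as reordering filters so that consecutive ones are as similar as possible, making these differences — and hence the entropy seen by the Huffman coder — even smaller.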
first_indexed | 2024-03-12T17:34:50Z |
format | Article |
id | doaj.art-ab632507503d4c2c8d66bbb447b4f6c4 |
institution | Directory Open Access Journal |
issn | 2624-909X |
language | English |
last_indexed | 2024-03-12T17:34:50Z |
publishDate | 2023-08-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Big Data |
spelling | doaj.art-ab632507503d4c2c8d66bbb447b4f6c42023-08-04T12:16:36ZengFrontiers Media S.A.Frontiers in Big Data2624-909X2023-08-01610.3389/fdata.2023.12003821200382Kernel-wise difference minimization for convolutional neural network compression in metaverseYi-Ting ChangConvolutional neural networks have achieved remarkable success in computer vision research. However, to further improve their performance, network models have become increasingly complex and require more memory and computational resources. As a result, model compression has become an essential area of research in recent years. In this study, we focus on the best-case scenario for Huffman coding, which involves data with lower entropy. Building on this concept, we formulate a compression with a filter-wise difference minimization problem and propose a novel algorithm to solve it. Our approach involves filter-level pruning, followed by minimizing the difference between filters. Additionally, we perform filter permutation to further enhance compression. Our proposed algorithm achieves a compression rate of 94× on Lenet-5 and 50× on VGG16. The results demonstrate the effectiveness of our method in significantly reducing the size of deep neural networks while maintaining a high level of accuracy. We believe that our approach holds great promise in advancing the field of model compression and can benefit various applications that require efficient neural network models. Overall, this study provides important insights and contributions toward addressing the challenges of model compression in deep neural networks.https://www.frontiersin.org/articles/10.3389/fdata.2023.1200382/fullmetaversecomputer visionHuffman codingfilter-level pruningCNN |
spellingShingle | Yi-Ting Chang Kernel-wise difference minimization for convolutional neural network compression in metaverse Frontiers in Big Data metaverse computer vision Huffman coding filter-level pruning CNN |
title | Kernel-wise difference minimization for convolutional neural network compression in metaverse |
title_full | Kernel-wise difference minimization for convolutional neural network compression in metaverse |
title_fullStr | Kernel-wise difference minimization for convolutional neural network compression in metaverse |
title_full_unstemmed | Kernel-wise difference minimization for convolutional neural network compression in metaverse |
title_short | Kernel-wise difference minimization for convolutional neural network compression in metaverse |
title_sort | kernel wise difference minimization for convolutional neural network compression in metaverse |
topic | metaverse computer vision Huffman coding filter-level pruning CNN |
url | https://www.frontiersin.org/articles/10.3389/fdata.2023.1200382/full |
work_keys_str_mv | AT yitingchang kernelwisedifferenceminimizationforconvolutionalneuralnetworkcompressioninmetaverse |