Approximating continuous convolutions for deep network compression
We present ApproxConv, a novel method for compressing the layers of a convolutional neural network. Reframing conventional discrete convolution as continuous convolution of parametrised functions over space, we use functional approximations to capture the essential structures of CNN filters with fewer parameters than conventional operations. Our method is able to reduce the size of trained CNN layers, requiring only a small amount of fine-tuning. We show that our method is able to compress existing deep network models by half whilst losing only 1.86% accuracy. Further, we demonstrate that our method is compatible with other compression methods, like quantisation, allowing for further reductions in model size.
Main Authors: | Costain, TW; Prisacariu, VA |
---|---|
Format: | Conference item |
Language: | English |
Published: | British Machine Vision Association, 2022 |
author | Costain, TW Prisacariu, VA |
collection | OXFORD |
description | We present ApproxConv, a novel method for compressing the layers of a convolutional neural network. Reframing conventional discrete convolution as continuous convolution of parametrised functions over space, we use functional approximations to capture the essential structures of CNN filters with fewer parameters than conventional operations. Our method is able to reduce the size of trained CNN layers, requiring only a small amount of fine-tuning. We show that our method is able to compress existing deep network models by half whilst losing only 1.86% accuracy. Further, we demonstrate that our method is compatible with other compression methods, like quantisation, allowing for further reductions in model size. |
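The abstract describes replacing discrete CNN filters with parametrised continuous functions over space, fitted so that fewer parameters capture the filter's essential structure. The record does not specify the paper's actual parametrisation, so the sketch below is only a rough illustration of the general idea under an assumed basis: it fits a degree-2 polynomial f(x, y) to a trained 5x5 kernel by least squares and resamples it, storing 6 coefficients in place of 25 weights. The function names (`fit_polynomial_filter`, `evaluate_filter`) and the polynomial choice are hypothetical, not the authors' method.

```python
import numpy as np

def fit_polynomial_filter(kernel, degree=2):
    """Fit f(x, y) = sum_{i+j<=degree} c_ij * x^i * y^j to a discrete kernel."""
    k = kernel.shape[0]
    # Place the kernel taps at spatial coordinates on [-1, 1].
    coords = np.linspace(-1.0, 1.0, k)
    xs, ys = np.meshgrid(coords, coords, indexing="ij")
    # One design-matrix column per polynomial term with i + j <= degree.
    terms = [(i, j) for i in range(degree + 1)
             for j in range(degree + 1) if i + j <= degree]
    A = np.stack([xs.ravel() ** i * ys.ravel() ** j for i, j in terms], axis=1)
    # Least-squares fit of the continuous function to the discrete weights.
    coeffs, *_ = np.linalg.lstsq(A, kernel.ravel(), rcond=None)
    return terms, coeffs

def evaluate_filter(terms, coeffs, k):
    """Resample the continuous approximation back to a k x k discrete kernel."""
    coords = np.linspace(-1.0, 1.0, k)
    xs, ys = np.meshgrid(coords, coords, indexing="ij")
    A = np.stack([xs.ravel() ** i * ys.ravel() ** j for i, j in terms], axis=1)
    return (A @ coeffs).reshape(k, k)

# A 5x5 kernel has 25 weights; a degree-2 polynomial stores only 6 coefficients.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((5, 5))
terms, coeffs = fit_polynomial_filter(kernel, degree=2)
approx = evaluate_filter(terms, coeffs, k=5)
print(len(coeffs), approx.shape)
```

Resampling the fitted function at the original tap positions recovers an ordinary discrete kernel, which is consistent with the abstract's claim that compressed layers need only a small amount of fine-tuning afterwards.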
id | oxford-uuid:93aa1aae-f6f1-455b-a30a-6632ed59bf49 |
institution | University of Oxford |