Gradual Channel Pruning While Training Using Feature Relevance Scores for Convolutional Neural Networks

The enormous inference cost of deep neural networks can be mitigated by network compression. Pruning connections is one of the predominant approaches used for network compression. However, existing pruning techniques suffer from one or more of the following limitations: 1) They increase the time and...
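The abstract's pruning context can be illustrated with a generic channel-pruning sketch. This is not the authors' relevance-score criterion (the abstract is truncated before it is defined); it uses a common stand-in, the per-channel L1 norm of convolutional filters, with NumPy-shaped weights as an assumption:

```python
import numpy as np

def channel_scores(weights):
    """Score each output channel by the L1 norm of its filter.

    weights: array of shape (out_channels, in_channels, kH, kW).
    A stand-in importance measure; the paper uses feature
    relevance scores instead.
    """
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def prune_channels(weights, fraction):
    """Remove the lowest-scoring fraction of output channels."""
    scores = channel_scores(weights)
    k = int(len(scores) * fraction)          # number of channels to drop
    keep = np.sort(np.argsort(scores)[k:])   # indices of surviving channels
    return weights[keep]

# Example: prune 25% of an 8-channel conv layer, leaving 6 channels.
layer = np.random.randn(8, 3, 3, 3)
pruned = prune_channels(layer, 0.25)
```

In a gradual scheme like the one the title describes, such a pruning step would be interleaved with training epochs rather than applied once after convergence.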


Bibliographic Details
Main Authors: Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9199834/