Accelerating Super-Resolution Network Inference via Sensitivity-Based Weight Sparsity Allocation

Bibliographic Details
Main Authors: Tuan Nghia Nguyen, Xuan Truong Nguyen, Kyujoong Lee, Hyuk-Jae Lee
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10298064/
Description
Summary: Weight sparsification has been extensively studied in image classification and object detection to accelerate network inference. However, for image generation tasks such as image super-resolution, forcing some weights to zero is a non-trivial task that typically causes significant degradation in restoration quality, that is, peak signal-to-noise ratio (PSNR). In this study, we first introduce a sensitivity metric that measures PSNR degradation under layer-wise sparsity changes and observe that sensitivities vary significantly across network layers. We demonstrate that uniform sparsity allocation generally causes a non-negligible accuracy drop of approximately 0.17 dB when 65% of the weights are set to zero. In addition, finding an optimal solution to the sparsity allocation problem is infeasible because the design space grows exponentially with the number of weights and layers. To address this problem, we propose a simple yet effective sparsity allocation method based on layer-wise sensitivity. Experimental results demonstrate that the proposed method achieves up to 35% computation reduction with an average accuracy drop of 0.02 dB, ranging from 0.01 to 0.04 dB across the well-known datasets Set5, Set14, B100, and Urban100. Moreover, when integrated with the activation-sparsity method SMSR, the proposed approach reduces computation by 46% on average.
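
The abstract outlines the core idea: measure each layer's PSNR sensitivity to sparsification and allocate higher sparsity to less sensitive layers. Below is a minimal, hypothetical Python/PyTorch sketch of that idea; the `evaluate_psnr` helper, the magnitude-based pruning, and the inverse-sensitivity allocation rule are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of sensitivity-guided layer-wise sparsity allocation.
# `evaluate_psnr(model)` is an assumed user-supplied helper that returns PSNR
# on a validation set; the paper's exact metric and allocation rule may differ.
import copy
import torch.nn as nn


def prune_layer(conv: nn.Conv2d, sparsity: float) -> None:
    """Zero the smallest-magnitude weights so that `sparsity` of them become zero."""
    w = conv.weight.data
    k = int(sparsity * w.numel())
    if k == 0:
        return
    threshold = w.abs().flatten().kthvalue(k).values
    w[w.abs() <= threshold] = 0.0


def layer_sensitivities(model: nn.Module, evaluate_psnr, probe_sparsity: float = 0.5) -> dict:
    """Sensitivity of a conv layer = PSNR drop when only that layer is pruned."""
    baseline = evaluate_psnr(model)
    sensitivities = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            probe = copy.deepcopy(model)
            prune_layer(dict(probe.named_modules())[name], probe_sparsity)
            sensitivities[name] = baseline - evaluate_psnr(probe)
    return sensitivities


def allocate_sparsity(sensitivities: dict, target_sparsity: float = 0.65) -> dict:
    """Assign higher sparsity to less sensitive layers, keeping the mean near the target."""
    inverse = {n: 1.0 / (max(s, 0.0) + 1e-6) for n, s in sensitivities.items()}
    total = sum(inverse.values())
    n_layers = len(inverse)
    return {n: min(0.95, target_sparsity * n_layers * v / total) for n, v in inverse.items()}
```

In practice, the per-layer ratios returned by `allocate_sparsity` would be fed back into `prune_layer` before fine-tuning; the 0.95 clamp and the inverse-sensitivity weighting are illustrative choices, not values reported in the article.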
ISSN: 2169-3536