Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization


Bibliographic Details
Main Authors: Huruy Tesfai, Hani Saleh, Mahmoud Al-Qutayri, Baker Mohammad, Thanos Stouraitis
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10151890/
Description
Summary: Deep learning networks achieve high accuracy for many classification tasks in computer vision and natural language processing. As these models are usually over-parameterized, the computations and memory required are unsuitable for power-constrained devices. One effective technique to reduce this burden is low-bit quantization. However, the introduced quantization error causes a drop in classification accuracy and requires design rethinking. To benefit from the hardware-friendly power-of-two (POT) and additive POT quantization, we explore various gradient estimation methods and propose quantization error-aware gradient estimation that manoeuvres the weight update to be as close to the projection steps as possible. The clipping or scaling coefficients of the quantization scheme are learned jointly with the model parameters to minimize quantization error. We also apply per-channel quantization to POT and additive POT quantized models to minimize the accuracy degradation caused by the rigid resolution property of POT quantization. We show that comparable accuracy can be achieved when using the proposed gradient estimation for POT quantization, even at ultra-low bit precision.
ISSN:2169-3536
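
To illustrate the power-of-two (POT) quantization the abstract refers to, the sketch below rounds each weight magnitude to the nearest power of two within a range set by the bit width, with a learnable scaling coefficient `alpha`. This is a minimal, hedged illustration of generic POT quantization, not the authors' exact scheme or gradient estimator; the function name, the level allocation, and the clipping convention are assumptions for illustration.

```python
import numpy as np

def pot_quantize(w, bits=4, alpha=1.0):
    """Sketch of power-of-two quantization (illustrative, not the
    paper's exact method). One bit encodes the sign; the remaining
    bits index 2**(bits-1) exponent levels {0, -1, ..., 1-2**(bits-1)},
    so every nonzero output is alpha * (+/-) 2**e for some integer e."""
    levels = 2 ** (bits - 1)                                   # number of exponent levels
    s = np.sign(w)                                             # keep the sign separately
    x = np.clip(np.abs(w) / alpha, 2.0 ** (1 - levels), 1.0)   # scale and clip magnitude
    e = np.round(np.log2(x))                                   # nearest power-of-two exponent
    return alpha * s * 2.0 ** e

# Example: 0.3 snaps to 2**-2 = 0.25, 0.6 snaps to 2**-1 = 0.5.
print(pot_quantize(np.array([0.3, 0.6, 0.0])))  # [0.25 0.5  0.  ]
```

Additive POT, as described in the abstract, would instead represent each weight as a sum of two or more such power-of-two terms, which narrows the gaps between representable values at larger magnitudes; during training, the non-differentiable rounding step is typically bypassed with a gradient estimator, which is the part the paper refines.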