Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization
Deep learning networks achieve high accuracy for many classification tasks in computer vision and natural language processing. As these models are usually over-parameterized, the computations and memory required are unsuitable for power-constrained devices. One effective technique to reduce this burden is through low-bit quantization.
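The abstract above describes projecting weights onto hardware-friendly power-of-two (POT) levels, scaled by a learned clipping coefficient. As a minimal illustration only (not the paper's exact scheme; the precise level set, the per-channel handling, and the proposed quantization error-aware gradient estimator are all defined in the article itself), the sketch below projects a weight tensor onto one common POT grid via nearest-neighbor rounding:

```python
import numpy as np

def pot_levels(alpha, bits):
    """One common b-bit POT grid (an assumption, variants exist):
    zero plus +/- alpha * 2^-k for k = 0 .. 2^(bits-1) - 2."""
    exps = np.arange(0, 2 ** (bits - 1) - 1)
    mags = alpha * 2.0 ** (-exps)
    return np.concatenate(([0.0], mags, -mags))

def pot_quantize(w, alpha, bits=4):
    """Hard projection step: snap each weight to its nearest POT level.
    alpha is the clipping/scaling coefficient (learned jointly with the
    weights in the paper; a fixed input here)."""
    levels = pot_levels(alpha, bits)
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]
```

During quantization-aware training, gradients are typically passed through this non-differentiable projection with a straight-through-style estimator; the article's contribution is a refined estimator that keeps weight updates close to these projection steps.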
Main Authors: | Huruy Tesfai, Hani Saleh, Mahmoud Al-Qutayri, Baker Mohammad, Thanasios Stouraitis |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Deep neural network; non-uniform quantization; gradient estimation |
Online Access: | https://ieeexplore.ieee.org/document/10151890/ |
_version_ | 1797796544008683520 |
---|---|
author | Huruy Tesfai, Hani Saleh, Mahmoud Al-Qutayri, Baker Mohammad, Thanasios Stouraitis |
author_facet | Huruy Tesfai, Hani Saleh, Mahmoud Al-Qutayri, Baker Mohammad, Thanasios Stouraitis |
author_sort | Huruy Tesfai |
collection | DOAJ |
description | Deep learning networks achieve high accuracy for many classification tasks in computer vision and natural language processing. As these models are usually over-parameterized, the computations and memory required are unsuitable for power-constrained devices. One effective technique to reduce this burden is through low-bit quantization. However, the introduced quantization error causes a drop in the classification accuracy and requires design rethinking. To benefit from the hardware-friendly power-of-two (POT) and additive POT quantization, we explore various gradient estimation methods and propose quantization error-aware gradient estimation that manoeuvres weight update to be as close to the projection steps as possible. The clipping or scaling coefficients of the quantization scheme are learned jointly with the model parameters to minimize quantization error. We also apply per-channel quantization on POT and additive POT quantized models to minimize the accuracy degradation due to the rigid resolution property of POT quantization. We show that comparable accuracy can be achieved when using the proposed gradient estimation for POT quantization, even at ultra-low bit precision. |
first_indexed | 2024-03-13T03:34:36Z |
format | Article |
id | doaj.art-9a092d75d5a742f3a3f0bcb712ae42eb |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-13T03:34:36Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-9a092d75d5a742f3a3f0bcb712ae42eb (2023-06-23T23:00:31Z), eng, IEEE, IEEE Access, ISSN 2169-3536, 2023-01-01, Vol. 11, pp. 61264-61272, doi: 10.1109/ACCESS.2023.3286299, document 10151890. Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization. Huruy Tesfai (https://orcid.org/0000-0001-7108-641X), Hani Saleh (https://orcid.org/0000-0002-7185-0278), Mahmoud Al-Qutayri (https://orcid.org/0000-0002-9600-8036), Baker Mohammad (https://orcid.org/0000-0002-6063-473X), and Thanasios Stouraitis (https://orcid.org/0000-0002-3696-4958), all with the Department of Electrical Engineering and Computer Science, System on Chip Center, Khalifa University, Abu Dhabi, United Arab Emirates. Online access: https://ieeexplore.ieee.org/document/10151890/. Topics: Deep neural network; non-uniform quantization; gradient estimation |
spellingShingle | Huruy Tesfai; Hani Saleh; Mahmoud Al-Qutayri; Baker Mohammad; Thanasios Stouraitis. Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization. IEEE Access. Deep neural network; non-uniform quantization; gradient estimation |
title | Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization |
title_full | Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization |
title_fullStr | Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization |
title_full_unstemmed | Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization |
title_short | Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization |
title_sort | gradient estimation for ultra low precision pot and additive pot quantization |
topic | Deep neural network; non-uniform quantization; gradient estimation |
url | https://ieeexplore.ieee.org/document/10151890/ |
work_keys_str_mv | AT huruytesfai gradientestimationforultralowprecisionpotandadditivepotquantization AT hanisaleh gradientestimationforultralowprecisionpotandadditivepotquantization AT mahmoudalqutayri gradientestimationforultralowprecisionpotandadditivepotquantization AT bakermohammad gradientestimationforultralowprecisionpotandadditivepotquantization AT thanasiosstouraitis gradientestimationforultralowprecisionpotandadditivepotquantization |