HAQ: Hardware-Aware Automated Quantization With Mixed Precision

Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators are beginning to support mixed precision (1-8 bits) to further improve computational efficiency, which raises a great challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space, trading off among accuracy, latency, energy, and model size, which is both time-consuming and sub-optimal. There is plenty of specialized hardware for neural networks, but little research has been done on specializing neural network optimization for a particular hardware architecture. Conventional quantization algorithms ignore the differences between hardware architectures and quantize all layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy, and we take the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals (latency and energy) to the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network and hardware architectures. Our framework effectively reduces latency by 1.4-1.95x and energy consumption by 1.9x with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, energy, and model size) are drastically different. We interpret the implications of the different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.
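
To make the approach described above concrete, here is a minimal, self-contained Python sketch (not from the paper) of the two ingredients the abstract mentions: a per-layer uniform quantizer whose bitwidth is the agent's action, and direct (simulated) latency feedback used in the reward instead of proxies such as FLOPs. The functions simulated_latency and reward are hypothetical stand-ins for illustration only; they are not the paper's hardware simulator or its exact reward formulation.

    import numpy as np

    def linear_quantize(weights, n_bits):
        # Symmetric, per-tensor uniform quantization to n_bits (2-8 here;
        # a 1-bit layer would need a special-cased binary quantizer).
        q_max = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8 bits
        scale = np.abs(weights).max() / q_max
        if scale == 0.0:
            return weights.copy()
        q = np.clip(np.round(weights / scale), -q_max, q_max)
        return q * scale                         # "fake-quantized" weights

    def simulated_latency(bitwidths, macs_per_layer):
        # Hypothetical stand-in for the hardware simulator: assumes a layer's
        # latency scales with MACs * bitwidth. The real simulator models the
        # behaviour of the target accelerator.
        return sum(b * m for b, m in zip(bitwidths, macs_per_layer)) * 1e-9

    def reward(accuracy, latency, latency_budget):
        # Toy reward: accuracy when the direct latency feedback meets the budget,
        # penalized otherwise. The paper's exact reward and constraint handling differ.
        if latency <= latency_budget:
            return accuracy
        return accuracy - 10.0 * (latency - latency_budget)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        layers = [rng.standard_normal((64, 64)) for _ in range(4)]
        macs = [w.size for w in layers]

        policy = [8, 4, 4, 6]                    # one candidate per-layer bitwidth policy
        quantized = [linear_quantize(w, b) for w, b in zip(layers, policy)]
        err = np.mean([np.abs(w - q).mean() for w, q in zip(layers, quantized)])

        latency = simulated_latency(policy, macs)
        accuracy = 0.70                          # placeholder: would come from evaluating the quantized model
        print(f"policy={policy} quant_error={err:.4f} latency={latency:.2e}s "
              f"reward={reward(accuracy, latency, 1e-4):.3f}")

In the full framework, an RL agent (DDPG in the paper) proposes the per-layer bitwidth policy, observes the quantized model's accuracy together with the simulator's latency and energy readings, and updates itself from the reward; the snippet above only illustrates one such evaluation step under the stated assumptions.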


Bibliographic Details
Authors: Wang, Kuan; Liu, Zhijian; Lin, Yujun; Lin, Ji; Han, Song
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021
Online Access: https://hdl.handle.net/1721.1/129522
Citation: Wang, Kuan et al. "HAQ: Hardware-Aware Automated Quantization With Mixed Precision." Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, 16-20 June 2019. IEEE. © 2019 The Author(s).
Conference: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Date Issued: 2019-06
DOI: 10.1109/CVPR.2019.00881
ISBN: 9781728132938; 9781728132945
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Source: arXiv