O-2A: Outlier-Aware Compression for 8-bit Post-Training Quantization Model

Post-Training Quantization (PTQ) is a practical and cost-effective technique that reduces the memory footprint of Deep Neural Networks (DNNs). However, the effectiveness of PTQ is limited by a notable decrease in accuracy when the precision falls below 8 bits. To overcome this limitation of PTQ, we...
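For context, the sketch below illustrates generic uniform symmetric 8-bit post-training quantization in NumPy. It is an illustrative assumption only (function names are hypothetical) and is not the O-2A outlier-aware scheme the article proposes.

```python
import numpy as np

def quantize_8bit(weights: np.ndarray):
    """Generic uniform symmetric 8-bit PTQ of a weight tensor (illustrative sketch)."""
    # Scale chosen so the largest-magnitude weight maps onto the int8 range.
    scale = max(np.max(np.abs(weights)), 1e-8) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover a float approximation of the original weights.
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_8bit(w)
    print("max abs reconstruction error:", np.max(np.abs(w - dequantize(q, s))))
```

In such a scheme, a single large-magnitude outlier inflates the scale and wastes resolution on the remaining weights; this is the kind of issue that outlier-aware approaches, like the one in the title, are designed to address.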


Bibliographic Details
Main Authors: Nguyen-Dong Ho, Ik-Joon Chang
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10237192/