4.6-Bit Quantization for Fast and Accurate Neural Network Inference on CPUs

Quantization is a widespread method for reducing the inference time of neural networks on mobile Central Processing Units (CPUs). Eight-bit quantized networks demonstrate quality similar to that of full-precision models and fit the hardware architecture well, with one-byte coefficients and thirty...


Bibliographic Details
Main Authors: Anton Trusov, Elena Limonova, Dmitry Nikolaev, Vladimir V. Arlazarov
Format: Article
Language: English
Published: MDPI AG 2024-02-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/12/5/651