Neuron-by-Neuron Quantization for Efficient Low-Bit QNN Training
Quantized neural networks (QNNs) are widely used to achieve computationally efficient solutions to recognition problems. Overall, eight-bit QNNs have almost the same accuracy as full-precision networks but work several times faster. However, networks with lower quantization levels demonstrate...
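The abstract is truncated, but the trade-off it describes can be illustrated with a minimal sketch of generic uniform k-bit quantization. This is not the neuron-by-neuron scheme the article proposes, only a standard baseline; the function name `uniform_quantize` and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def uniform_quantize(x: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniformly quantize a tensor to the given bit width, then dequantize.

    Generic illustration only, not the article's neuron-by-neuron method.
    """
    levels = 2 ** bits - 1               # number of quantization steps
    x_min = x.min()
    rng = x.max() - x_min
    scale = rng / levels if rng > 0 else 1.0  # guard against a constant tensor
    codes = np.round((x - x_min) / scale)     # integer codes in [0, levels]
    return codes * scale + x_min              # dequantized approximation

# Lower bit widths coarsen the representation, which is the source of the
# accuracy drop at low quantization levels described in the abstract.
w = np.random.randn(1000).astype(np.float32)
for b in (8, 4, 2):
    err = np.abs(uniform_quantize(w, b) - w).mean()
    print(f"{b}-bit mean absolute error: {err:.4f}")
```

Running the sketch shows the quantization error growing as the bit width drops from eight to two, mirroring the accuracy gap between eight-bit and lower-bit QNNs that motivates the article.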
| Main Authors: | Artem Sher, Anton Trusov, Elena Limonova, Dmitry Nikolaev, Vladimir V. Arlazarov |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-04-01 |
| Series: | Mathematics |
| Online Access: | https://www.mdpi.com/2227-7390/11/9/2112 |
Similar Items
- 4.6-Bit Quantization for Fast and Accurate Neural Network Inference on CPUs
  by: Anton Trusov, et al.
  Published: (2024-02-01)
- Joint high-dimensional soft bit estimation and quantization using deep learning
  by: Marius Arvinte, et al.
  Published: (2022-06-01)
- Latitude-Adaptive Integer Bit Allocation for Quantization of Omnidirectional Images
  by: Qian Sima, et al.
  Published: (2024-02-01)
- Two Novel Non-Uniform Quantizers with Application in Post-Training Quantization
  by: Zoran Perić, et al.
  Published: (2022-09-01)
- Nomograms for comparing the corrective abilities of binary and ternary neurons used in multicriteria testing of the hypothesis of small sample data independence
  by: V.I. Volchikhin, et al.
  Published: (2023-01-01)