Proximal mean-field for neural network quantization
Compressing large neural networks (NNs) by quantizing their parameters, while maintaining performance, is highly desirable due to the reduced memory and time complexity. In this work, we cast NN quantization as a discrete labelling problem and, by examining relaxations, design an efficient iterative...
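The abstract describes casting quantization as a discrete labelling problem and relaxing it. Purely as illustration, here is a minimal NumPy sketch of that general idea: a soft, mean-field-style assignment of each weight to a set of quantization levels, annealed toward hard labels. The binary level set {-1, +1}, the `soft_quantize` name, and the beta schedule are assumptions for this sketch, not the paper's actual algorithm.

```python
import numpy as np

# Discrete quantization levels; the binary set {-1, +1} is assumed
# here for illustration only -- the level set is a design choice.
LEVELS = np.array([-1.0, 1.0])

def soft_quantize(w, levels, beta):
    """Relaxed discrete labelling: each weight gets a softmax
    distribution over the quantization levels, and its quantized
    value is the expectation under that distribution."""
    # Negative squared distance to each level, scaled by the inverse
    # temperature beta; shape (num_weights, num_levels).
    logits = -beta * (w[:, None] - levels[None, :]) ** 2
    # Numerically stable softmax over the levels.
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs @ levels

# Annealing beta pushes the soft assignments toward hard (discrete)
# labels: as beta grows, each weight snaps to its nearest level.
w = np.array([0.7, -0.2, 0.05, -1.3])
for beta in [1.0, 10.0, 100.0]:
    print(beta, soft_quantize(w, LEVELS, beta))
```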
| Main Authors: | Ajanthan, T, Dokania, P, Hartley, R, Torr, P |
| --- | --- |
| Format: | Conference item |
| Language: | English |
| Published: | IEEE, 2020 |
Similar Items

- Mirror Descent view for Neural Network quantization
  by: Ajanthan, T, et al.
  Published: (2021)
- Secure inference of quantized neural networks
  by: Mehta, Haripriya (Haripriya P.)
  Published: (2020)
- Riemannian walk for incremental learning: Understanding forgetting and intransigence
  by: Chaudhry, A, et al.
  Published: (2018)
- Data parallelism in training sparse neural networks
  by: Lee, N, et al.
  Published: (2020)
- A signal propagation perspective for pruning neural networks at initialization
  by: Lee, N, et al.
  Published: (2019)