Data parallelism in training sparse neural networks
Network pruning is an effective methodology for compressing large neural networks, and the sparse neural networks obtained by pruning benefit from reduced memory and computational costs in use. Notably, recent advances have found that it is possible to find a trainable sparse neural network even a...
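The abstract is truncated above, but as a generic illustration of the pruning idea it refers to (not the authors' own procedure), the sketch below shows simple magnitude pruning in NumPy: a dense weight matrix is turned into a sparse one via a binary mask. The function name `magnitude_prune` and the 90% sparsity level are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    A generic magnitude-pruning sketch: keeps the largest (1 - sparsity)
    fraction of weights by absolute value and returns the pruned weights
    together with the binary mask.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

# Example: prune 90% of a randomly initialized dense layer
w = np.random.randn(256, 128)
w_sparse, mask = magnitude_prune(w, sparsity=0.9)
print(f"remaining weights: {mask.mean():.2%}")
```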
| Main Authors: | Lee, N, Ajanthan, T, Torr, PHS, Jaggi, M |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | ICLR, 2020 |
Similar Items
- Understanding the effects of data parallelism and sparsity on neural network training
  by: Lee, N, et al.
  Published: (2020)
- A signal propagation perspective for pruning neural networks at initialization
  by: Lee, N, et al.
  Published: (2019)
- Mirror Descent view for Neural Network quantization
  by: Ajanthan, T, et al.
  Published: (2021)
- Efficient relaxations for dense CRFs with sparse higher-order potentials
  by: Joy, T, et al.
  Published: (2019)
- Proximal mean-field for neural network quantization
  by: Ajanthan, T, et al.
  Published: (2020)