Understanding the effects of data parallelism and sparsity on neural network training
We study two factors in neural network training: data parallelism and sparsity; here, data parallelism means processing training data in parallel using distributed systems (or equivalently increasing batch size), so that training can be accelerated; for sparsity, we refer to pruning parameters in a...
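As a rough illustration of the two factors named in the abstract, the NumPy sketch below shows (a) sparsity as parameter pruning, using simple magnitude pruning as an illustrative stand-in for whatever pruning criterion the paper actually studies, and (b) data parallelism, where the gradient on a large batch is recovered as the average of gradients computed independently on per-worker shards. The least-squares loss, shard count, and function names here are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Sparsity: prune parameters by zeroing the smallest-magnitude entries ---
def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of entries in `w`."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w).ravel())[k - 1]
    return np.where(np.abs(w) > threshold, w, 0.0)

w = rng.normal(size=(4, 8))
w_sparse = magnitude_prune(w, sparsity=0.9)
print("kept fraction:", np.count_nonzero(w_sparse) / w.size)

# --- Data parallelism: a large batch is the average of per-worker shards ---
# For a least-squares loss, the gradient on the full batch equals the mean of
# the gradients computed independently on equal-sized worker shards.
X, y = rng.normal(size=(64, 8)), rng.normal(size=64)

def grad(w_vec, Xb, yb):
    return 2.0 * Xb.T @ (Xb @ w_vec - yb) / len(yb)

w_vec = rng.normal(size=8)
full_batch_grad = grad(w_vec, X, y)
shard_grads = [grad(w_vec, Xs, ys)
               for Xs, ys in zip(np.split(X, 4), np.split(y, 4))]
print("match:", np.allclose(full_batch_grad, np.mean(shard_grads, axis=0)))
```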
Main Authors: | Lee, N, Ajanthan, T, Torr, PHS, Jaggi, M |
---|---|
Format: | Conference item |
Language: | English |
Published: | OpenReview, 2020 |
Similar Items
- Data parallelism in training sparse neural networks
  by: Lee, N, et al.
  Published: (2020)
- A signal propagation perspective for pruning neural networks at initialization
  by: Lee, N, et al.
  Published: (2019)
- Mirror Descent view for Neural Network quantization
  by: Ajanthan, T, et al.
  Published: (2021)
- Exploiting sparsity for neural network verification
  by: Newton, M, et al.
  Published: (2021)
- Multi-Step Training Framework Using Sparsity Training for Efficient Utilization of Accumulated New Data in Convolutional Neural Networks
  by: Jeong Jun Lee, et al.
  Published: (2023-01-01)