Data parallelism in training sparse neural networks

Network pruning is an effective methodology for compressing large neural networks, and the sparse neural networks obtained by pruning benefit from reduced memory and computational costs when deployed. Notably, recent advances have shown that a trainable sparse neural network can be found at random initialization, prior to training, so that only the resulting sparse network needs to be trained. While this approach of pruning at initialization has proven highly effective, little has been studied about the training behaviour of these sparse neural networks. In this work, we focus on measuring the effects of data parallelism on training sparse neural networks. We find that data parallelism in training sparse neural networks is no worse than in training densely parameterized neural networks, despite the general difficulty of training sparse networks. Moreover, when training sparse networks using SGD with momentum, the breakdown of the perfect scaling regime occurs at even larger batch sizes than for dense networks.
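The measurement described in the abstract, tracing how the number of training steps needed to reach a fixed target changes with batch size for a network pruned at initialization, can be sketched as follows. This is a minimal illustration only, not the paper's protocol: the random pruning mask, synthetic data, target loss, and all hyperparameters are placeholder assumptions (the paper prunes with a criterion such as SNIP and tunes metaparameters for each batch size).

# Minimal sketch: train a sparse (pruned-at-initialization) network with
# SGD + momentum at several batch sizes and record the number of steps
# needed to reach a fixed target, tracing the steps-vs-batch-size curve
# whose "perfect scaling" regime the paper studies. Data, mask, and
# hyperparameters below are placeholder assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
X = torch.randn(4096, 32)              # synthetic inputs (assumption)
y = (X.sum(dim=1) > 0).long()          # synthetic binary labels

def make_sparse_model(sparsity=0.9):
    """Build a small MLP and prune it at initialization with a random mask
    (a stand-in for a pruning criterion such as SNIP)."""
    model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 2))
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() > 1:            # prune weight matrices only
                masks[name] = (torch.rand_like(p) > sparsity).float()
                p.mul_(masks[name])
    return model, masks

def steps_to_target(batch_size, target_loss=0.30, max_steps=5000, lr=0.1):
    """Return the number of SGD+momentum steps taken to reach target_loss."""
    model, masks = make_sparse_model()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    step = 0
    while step < max_steps:
        for xb, yb in loader:
            loss = F.cross_entropy(model(xb), yb)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():      # re-apply masks so pruned weights stay zero
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
            step += 1
            if loss.item() < target_loss or step >= max_steps:
                return step
    return step

for bs in [32, 64, 128, 256, 512]:
    print(f"batch size {bs:4d}: {steps_to_target(bs)} steps to target")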

Bibliographic Details
Main Authors: Lee, N, Ajanthan, T, Torr, PHS, Jaggi, M
Format: Conference item
Language: English
Published: ICLR 2020