Data parallelism in training sparse neural networks

Network pruning is an effective methodology for compressing large neural networks, and sparse neural networks obtained by pruning benefit from reduced memory and computational costs when deployed. Notably, recent advances have found that it is possible to find a trainable sparse neural network even a...

Bibliographic details

Main authors: Lee, N, Ajanthan, T, Torr, PHS, Jaggi, M
Format: Conference item
Language: English
Published: ICLR 2020