Progressive skeletonization: trimming more fat from a network at initialization
Recent studies have shown that skeletonization (pruning parameters) of networks at initialization provides all the practical benefits of sparsity both at inference and training time, while only marginally degrading their performance. However, we observe that beyond a certain level of sparsity (appro...
Authors: de Jorge, P; Sanyal, A; Behl, HS; Torr, PHS; Rogez, G; Dokania, PK
Format: Conference item
Language: English
Published: OpenReview, 2020
Similar resources
- Make some noise: reliable and efficient single-step adversarial training
  by: de Jorge, P, et al.
  Published: (2023)
- Placing objects in context via inpainting for out-of-distribution segmentation
  by: De Jorge, P, et al.
  Published: (2024)
- GDumb: A simple approach that questions our progress in continual learning
  by: Prabhu, A, et al.
  Published: (2020)
- On using focal loss for neural network calibration
  by: Mukhoti, J, et al.
  Published: (2020)
- Calibrating deep neural networks using focal loss
  by: Mukhoti, J, et al.
  Published: (2020)