Make some noise: reliable and efficient single-step adversarial training
Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. Experimentally they showed that simply adding a random perturbation prio...
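The abstract refers to single-step adversarial training in which a random perturbation is drawn before the FGSM step (the RS-FGSM scheme of Wong et al., 2020). Below is a minimal PyTorch-style sketch of that step, assuming an L-infinity threat model, a cross-entropy loss, and inputs in [0, 1]; the function name, the step size `alpha`, and the clamping details are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F

def rs_fgsm_example(model, x, y, eps, alpha):
    """Sketch of one RS-FGSM attack step: random start, then a single FGSM step.

    eps   -- L-infinity radius of the threat model
    alpha -- FGSM step size (Wong et al. use alpha slightly larger than eps)
    """
    # Random initialization inside the eps-ball: the "random perturbation
    # prior to FGSM" mentioned in the abstract.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta.requires_grad_(True)

    # One gradient computation with respect to the perturbation.
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]

    # Single signed-gradient step, projected back to the eps-ball
    # and to the valid pixel range.
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0.0, 1.0)
```

In single-step adversarial training the model is then updated on the loss at these perturbed inputs; catastrophic overfitting is observed when robustness to multi-step attacks (e.g. PGD) collapses even though robustness to the single-step attack remains high.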
| Main Authors: | de Jorge, P, Bibi, A, Volpi, R, Sanyal, A, Torr, PHS, Rogez, G, Dokania, PK |
| --- | --- |
| Format: | Conference item |
| Language: | English |
| Published: | Curran Associates, 2023 |
Similar Items
- Placing objects in context via inpainting for out-of-distribution segmentation
  by: De Jorge, P, et al.; Published: (2024)
- Progressive skeletonization: trimming more fat from a network at initialization
  by: de Jorge, P, et al.; Published: (2020)
- GDumb: A simple approach that questions our progress in continual learning
  by: Prabhu, A, et al.; Published: (2020)
- Discovering class-specific pixels for weakly-supervised semantic segmentation
  by: Chaudhry, A, et al.; Published: (2017)
- On using focal loss for neural network calibration
  by: Mukhoti, J, et al.; Published: (2020)