Make some noise: reliable and efficient single-step adversarial training
Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. Experimentally they showed that simply adding a random perturbation prio...
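The abstract refers to single-step FGSM adversarial training in which a random perturbation is drawn before the gradient step (RS-FGSM, Wong et al., 2020). Below is a minimal sketch of that idea in PyTorch; it is not the authors' implementation, and the model, optimizer, and the epsilon/alpha values are assumptions for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's code): one training step of
# single-step adversarial training with a random start before the FGSM step.
import torch
import torch.nn.functional as F

def rs_fgsm_train_step(model, x, y, optimizer, epsilon=8/255, alpha=10/255):
    """Random init inside the epsilon-ball, one FGSM step, update on adversarial loss."""
    # Draw a uniform random perturbation before taking the gradient step.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    # Single signed-gradient (FGSM) step, then project back to the epsilon-ball.
    delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
    x_adv = (x + delta).clamp(0, 1)  # keep inputs in a valid image range

    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```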
Main authors: de Jorge, P, Bibi, A, Volpi, R, Sanyal, A, Torr, PHS, Rogez, G, Dokania, PK
Format: Conference item
Language: English
Published: Curran Associates, 2023
Similar Items
- Placing objects in context via inpainting for out-of-distribution segmentation
  By: De Jorge, P, et al.
  Published: (2024)
- Progressive skeletonization: trimming more fat from a network at initialization
  By: de Jorge, P, et al.
  Published: (2020)
- GDumb: A simple approach that questions our progress in continual learning
  By: Prabhu, A, et al.
  Published: (2020)
- Discovering class-specific pixels for weakly-supervised semantic segmentation
  By: Chaudhry, A, et al.
  Published: (2017)
- On using focal loss for neural network calibration
  By: Mukhoti, J, et al.
  Published: (2020)