Using mixup as a regularizer can surprisingly improve accuracy and out-of-distribution robustness
We show that the effectiveness of the well-celebrated Mixup can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss. This simple change not only improves accuracy but also significantly improves the...
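The abstract describes combining a standard cross-entropy term on clean inputs with a mixup term used as an extra regularizer, rather than training on mixup alone. A minimal NumPy sketch of that idea follows; the function and parameter names (`mixup_regularized_loss`, `weight`, `alpha`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood of the true labels
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def mixup_regularized_loss(predict, x, y, rng, alpha=1.0, weight=1.0):
    """Cross-entropy on clean inputs plus a mixup term as a regularizer.

    `predict` maps a batch of inputs to class logits; `weight` (a
    hypothetical hyperparameter) scales the mixup term.
    """
    # standard cross-entropy on the unmixed batch
    ce_clean = cross_entropy(predict(x), y)

    # standard mixup: convex combination of inputs and of label losses
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    logits_mix = predict(x_mix)
    ce_mix = (lam * cross_entropy(logits_mix, y)
              + (1 - lam) * cross_entropy(logits_mix, y[perm]))

    # clean loss plus mixup used as an additional regularizer
    return ce_clean + weight * ce_mix

# Usage sketch with a random linear classifier:
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))          # batch of 8 inputs, 4 features
y = rng.integers(0, 3, size=8)       # 3-class labels
W = rng.normal(size=(4, 3))
loss = mixup_regularized_loss(lambda a: a @ W, x, y, rng)
```

Setting `weight=0` recovers plain cross-entropy training, so the mixup term can be tuned independently of the clean-data objective.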
Main authors: Pinto, F; Yang, H; Lim, SN; Torr, PHS; Dokania, PK
Format: Conference item
Language: English
Published in: Curran Associates, Inc, 2023
Similar items
- Mix-MaxEnt: improving accuracy and uncertainty estimates of deterministic neural networks
  by: Pinto, F, et al.
  Published in: (2021)
- Placing objects in context via inpainting for out-of-distribution segmentation
  by: De Jorge, P, et al.
  Published in: (2024)
- An impartial take to the CNN vs transformer robustness contest
  by: Pinto, F, et al.
  Published in: (2022)
- Raising the bar on the evaluation of out-of-distribution detection
  by: Mukhoti, J, et al.
  Published in: (2023)
- Are vision transformers always more robust than convolutional neural networks?
  by: Pinto, F, et al.
  Published in: (2021)