Using mixup as a regularizer can surprisingly improve accuracy and out-of-distribution robustness
We show that the effectiveness of the widely celebrated Mixup can be further improved if, instead of serving as the sole learning objective, it is used as an additional regularizer alongside the standard cross-entropy loss. This simple change not only improves accuracy but also significantly improves the...
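The abstract describes training with the standard cross-entropy loss plus a mixup term acting as a regularizer, rather than mixup as the sole objective. A minimal NumPy sketch of such a combined loss is below; the weighting coefficient `gamma` and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


def cross_entropy(logits, targets):
    # targets: one-hot (or soft) label matrix, shape (batch, classes)
    p = softmax(logits)
    return -(targets * np.log(p + 1e-12)).sum(axis=1).mean()


def mixup_regularized_loss(logits_clean, y, logits_mixed, y_a, y_b, lam, gamma=1.0):
    """Standard CE on the clean batch plus a mixup CE term as a regularizer.

    logits_mixed are the model's outputs on inputs mixed as
    lam * x_a + (1 - lam) * x_b; gamma (an assumed hyperparameter here)
    weights the mixup regularizer against the standard loss.
    """
    ce = cross_entropy(logits_clean, y)
    mix = lam * cross_entropy(logits_mixed, y_a) + (1 - lam) * cross_entropy(logits_mixed, y_b)
    return ce + gamma * mix
```

With `gamma=0` this reduces to plain cross-entropy training; using mixup alone as the objective would instead drop the `ce` term.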
Main Authors: Pinto, F, Yang, H, Lim, SN, Torr, PHS, Dokania, PK
Format: Conference item
Language: English
Published: Curran Associates, Inc., 2023
Similar Items
- Mix-MaxEnt: improving accuracy and uncertainty estimates of deterministic neural networks
  by: Pinto, F, et al.
  Published: (2021)
- Placing objects in context via inpainting for out-of-distribution segmentation
  by: De Jorge, P, et al.
  Published: (2024)
- Raising the bar on the evaluation of out-of-distribution detection
  by: Mukhoti, J, et al.
  Published: (2023)
- An impartial take to the CNN vs transformer robustness contest
  by: Pinto, F, et al.
  Published: (2022)
- Are vision transformers always more robust than convolutional neural networks?
  by: Pinto, F, et al.
  Published: (2021)