Using mixup as a regularizer can surprisingly improve accuracy and out-of-distribution robustness
We show that the effectiveness of the well-celebrated Mixup can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer alongside the standard cross-entropy loss. This simple change not only improves accuracy but also significantly improves the...
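The abstract describes training with the usual cross-entropy loss on clean inputs plus a Mixup cross-entropy term that acts as a regularizer. The following is a minimal sketch of that idea in PyTorch; the function name, the equal weighting of the two terms, and the Beta-distribution parameter `alpha` are illustrative assumptions, not details taken from this record.

```python
import torch
import torch.nn.functional as F

def mixup_regularized_loss(model, x, y, alpha=10.0):
    """Cross-entropy on the clean batch plus a Mixup cross-entropy
    term used as a regularizer (a sketch of the idea in the abstract;
    hyperparameters here are assumptions)."""
    # Standard cross-entropy on the unmixed batch.
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Build a Mixup batch by pairing each example with a shuffled one.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    logits_mix = model(x_mix)

    # Mixup term: interpolated cross-entropy on the mixed batch.
    mix_ce = lam * F.cross_entropy(logits_mix, y) \
        + (1.0 - lam) * F.cross_entropy(logits_mix, y[perm])

    # Total objective: cross-entropy regularized by the Mixup term,
    # rather than Mixup as the sole learning objective.
    return ce + mix_ce
```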
Main Authors: Pinto, F; Yang, H; Lim, SN; Torr, PHS; Dokania, PK
Format: Conference item
Language: English
Published: Curran Associates, Inc, 2023
Similar Items
- Mix-MaxEnt: improving accuracy and uncertainty estimates of deterministic neural networks
  By: Pinto, F, et al.
  Published: (2021)
- Placing objects in context via inpainting for out-of-distribution segmentation
  By: De Jorge, P, et al.
  Published: (2024)
- An impartial take to the CNN vs transformer robustness contest
  By: Pinto, F, et al.
  Published: (2022)
- Raising the bar on the evaluation of out-of-distribution detection
  By: Mukhoti, J, et al.
  Published: (2023)
- Are vision transformers always more robust than convolutional neural networks?
  By: Pinto, F, et al.
  Published: (2021)