Raising the bar on the evaluation of out-of-distribution detection
In image classification, substantial progress has been made in detecting out-of-distribution (OoD) data. However, most OoD detection methods are evaluated on a standard set of datasets that are arbitrarily different from the training data, and there is no clear definition of what constitutes a "good" OoD dataset...
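The truncated abstract refers to the standard way OoD detection is evaluated. As a point of reference only (this is the common baseline protocol, not the paper's own method), a minimal sketch is shown below: score in-distribution and OoD test samples with the maximum softmax probability (MSP) and report the AUROC between the two sets. The `model`, `id_loader`, and `ood_loader` names are hypothetical placeholders; PyTorch and scikit-learn are assumed.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

def msp_scores(model, loader, device="cpu"):
    """Maximum softmax probability (MSP) confidence for each input."""
    model.eval()
    scores = []
    with torch.no_grad():
        for images, _ in loader:
            logits = model(images.to(device))
            probs = F.softmax(logits, dim=1)
            scores.append(probs.max(dim=1).values.cpu().numpy())
    return np.concatenate(scores)

def ood_auroc(model, id_loader, ood_loader, device="cpu"):
    """AUROC for separating in-distribution (label 1) from OoD (label 0) samples."""
    id_s = msp_scores(model, id_loader, device)
    ood_s = msp_scores(model, ood_loader, device)
    labels = np.concatenate([np.ones_like(id_s), np.zeros_like(ood_s)])
    return roc_auc_score(labels, np.concatenate([id_s, ood_s]))
```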
| Main Authors: | Mukhoti, J, Lin, T-Y, Chen, B-C, Shah, A, Torr, PHS, Dokania, PK, Lim, S-N |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | IEEE, 2023 |
Similar books/articles

- Using mixup as a regularizer can surprisingly improve accuracy and out-of-distribution robustness
  by: Pinto, F, et al.
  Published: (2023)
- Placing objects in context via inpainting for out-of-distribution segmentation
  by: De Jorge, P, et al.
  Published: (2024)
- On using focal loss for neural network calibration
  by: Mukhoti, J, et al.
  Published: (2020)
- Calibrating deep neural networks using focal loss
  by: Mukhoti, J, et al.
  Published: (2020)
- GDumb: A simple approach that questions our progress in continual learning
  by: Prabhu, A, et al.
  Published: (2020)