Raising the bar on the evaluation of out-of-distribution detection
In image classification, substantial progress has been made in detecting out-of-distribution (OoD) data. However, most OoD detection methods are evaluated on a standard set of datasets that differ arbitrarily from the training data. There is no clear definition of what constitutes a "good" OoD dataset...
| Main Authors: | Mukhoti, J, Lin, T-Y, Chen, B-C, Shah, A, Torr, PHS, Dokania, PK, Lim, S-N |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | IEEE, 2023 |
Similar Items
- Using mixup as a regularizer can surprisingly improve accuracy and out-of-distribution robustness
  by: Pinto, F, et al.
  Published: (2023)
- Placing objects in context via inpainting for out-of-distribution segmentation
  by: De Jorge, P, et al.
  Published: (2024)
- On using focal loss for neural network calibration
  by: Mukhoti, J, et al.
  Published: (2020)
- Calibrating deep neural networks using focal loss
  by: Mukhoti, J, et al.
  Published: (2020)
- GDumb: A simple approach that questions our progress in continual learning
  by: Prabhu, A, et al.
  Published: (2020)