Calibrating deep neural networks using focal loss
Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Networks (DNNs) makes their predictions hard to rely on. Ideally, we want networks to be accurate, calibrated and confident. We show that, as opposed to the standard cross-entropy loss, focal loss (Lin et al., 2017) allows us to learn models that are already very well calibrated.
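The focal loss the abstract refers to down-weights well-classified examples relative to cross-entropy: FL(p_t) = -(1 - p_t)^gamma * log(p_t), where p_t is the predicted probability of the true class. Below is a minimal PyTorch sketch of this loss; the function name `focal_loss` and the value `gamma=3.0` are illustrative assumptions, not the paper's exact training recipe (the paper also studies schemes for choosing gamma automatically).

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 3.0) -> torch.Tensor:
    """Multi-class focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    gamma = 0 recovers standard cross-entropy; larger gamma reduces the
    contribution of confidently correct samples. gamma=3.0 here is an
    illustrative default, not a prescription from the paper.
    """
    # Per-sample cross-entropy is -log p_t, so negate it to get log p_t.
    log_pt = -F.cross_entropy(logits, targets, reduction="none")
    pt = log_pt.exp()  # probability assigned to the true class
    return ((1.0 - pt) ** gamma * -log_pt).mean()

# Usage sketch: batch of 4 samples, 10 classes.
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
loss = focal_loss(logits, targets)
```

The abstract also mentions combining the trained model with temperature scaling, which simply divides the logits by a scalar T fitted on a validation set before the softmax; that post-hoc step is independent of the loss used during training.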
Main Authors: Mukhoti, J; Kulharia, V; Sanyal, A; Golodetz, S; Torr, PHS; Dokania, PK
Format: Conference item
Language: English
Published: Curran Associates, 2020
Similar Items
- On using focal loss for neural network calibration, by Mukhoti, J, et al. (2020)
- Sample-dependent adaptive temperature scaling for improved calibration, by Joy, T, et al. (2023)
- Mirror Descent view for Neural Network quantization, by Ajanthan, T, et al. (2021)
- Mix-MaxEnt: improving accuracy and uncertainty estimates of deterministic neural networks, by Pinto, F, et al. (2021)
- Stable rank normalization for improved generalization in neural networks and GANs, by Sanyal, A, et al. (2020)