Dropout distillation for efficiently estimating model confidence

We propose an efficient way to output better calibrated uncertainty scores from neural networks. The Distilled Dropout Network (DDN) makes standard (non-Bayesian) neural networks more introspective by adding a new training loss which prevents them from being overconfident. Our method is more efficie...
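As a rough illustration of the general dropout-distillation idea the abstract describes (a teacher's Monte Carlo dropout predictions distilled into a single deterministic student via an added loss term), here is a minimal PyTorch-style sketch. The helper names (`mc_dropout_targets`, `distillation_loss`), the KL-based loss form, the sample count, and the weighting `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: averaging Monte Carlo dropout predictions from a teacher and
# using them as soft targets for a deterministic student. All hyperparameters
# and function names here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mc_dropout_targets(teacher: nn.Module, x: torch.Tensor, n_samples: int = 20) -> torch.Tensor:
    """Average softmax outputs over stochastic forward passes with dropout kept active."""
    teacher.train()  # keeps dropout layers stochastic (also affects batch norm, if present)
    with torch.no_grad():
        probs = torch.stack([F.softmax(teacher(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0)  # soft targets, shape (batch, classes)

def distillation_loss(student_logits: torch.Tensor,
                      soft_targets: torch.Tensor,
                      labels: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    """Combine the usual task loss with a KL term pulling the student's
    predictive distribution towards the MC-dropout average."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1), soft_targets, reduction="batchmean")
    return (1 - alpha) * ce + alpha * kl
```

In this sketch the student sees both the hard labels and the teacher's averaged predictive distribution, so at test time a single deterministic forward pass yields the calibrated scores without repeated dropout sampling.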

Bibliographic Details
Main Authors: Gurau, C, Bewley, A, Posner, I
Format: Journal article
Published: 2018