Dropout distillation for efficiently estimating model confidence
We propose an efficient way to output better calibrated uncertainty scores from neural networks. The Distilled Dropout Network (DDN) makes standard (non-Bayesian) neural networks more introspective by adding a new training loss which prevents them from being overconfident. Our method is more efficie...
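The abstract describes distilling Monte Carlo dropout uncertainty into a single deterministic network via an added training loss. Below is a minimal sketch of that general idea, not the authors' actual implementation: the loss weight, sample count, and function names (`DDN_DISTILL_WEIGHT`, `mc_dropout_target`, `ddn_loss`) are all hypothetical, and the distillation term shown (a KL divergence toward the averaged MC-dropout distribution) is one plausible instantiation of "a new training loss which prevents overconfidence".

```python
# Sketch only: distilling MC-dropout uncertainty into one deterministic forward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F

DDN_DISTILL_WEIGHT = 0.5   # hypothetical weight balancing the two loss terms
MC_SAMPLES = 20            # hypothetical number of stochastic dropout passes

def mc_dropout_target(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Average softmax over MC-dropout samples; used as the calibration target."""
    model.train()  # keep dropout layers active at "inference" time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(MC_SAMPLES)]
        )
    return probs.mean(dim=0)

def ddn_loss(student_logits: torch.Tensor,
             labels: torch.Tensor,
             mc_probs: torch.Tensor) -> torch.Tensor:
    """Standard cross-entropy plus a distillation term pulling the
    deterministic predictive distribution toward the MC-dropout average."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1), mc_probs,
                  reduction="batchmean")
    return ce + DDN_DISTILL_WEIGHT * kl
```

At test time the distilled network needs only one forward pass with dropout disabled, which is where the efficiency gain over repeated MC-dropout sampling comes from.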
Main Authors: |  |
---|---|
Format: | Journal article |
Published: | 2018 |