Do Deep Generative Models Know What They Don't Know?

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. G...

Bibliographic Details
Main Authors: Nalisnick, E, Matsukawa, A, Teh, Y, Gorur, D, Lakshminarayanan, B
Format: Conference item
Published: 2019