Looking at the posterior: accuracy and uncertainty of neural-network predictions
Bayesian inference can quantify uncertainty in the predictions of neural networks using posterior distributions for model parameters and network output. By looking at these posterior distributions, one can separate the origin of uncertainty into aleatoric and epistemic contributions. One goal of unc...
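The separation described in the abstract is commonly computed from an ensemble of posterior samples: total predictive uncertainty is the entropy of the averaged class probabilities, aleatoric uncertainty is the average entropy of the individual members, and their difference (the mutual information) is the epistemic part. A minimal sketch of this standard decomposition, using random stand-in probabilities rather than the authors' actual networks:

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats; small epsilon guards against log(0)
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

# Hypothetical ensemble: M posterior samples, each producing class
# probabilities for N inputs over K classes -> shape (M, N, K).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(5, 4))  # (M=5, N=4, K=3)

total = entropy(probs.mean(axis=0))      # entropy of the mean prediction
aleatoric = entropy(probs).mean(axis=0)  # mean entropy of ensemble members
epistemic = total - aleatoric            # mutual information, >= 0 by concavity
```

By Jensen's inequality the epistemic term is non-negative: it vanishes when all ensemble members agree and grows with disagreement between posterior samples, which is the signature of model (epistemic) uncertainty.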
Main Authors: Hampus Linander, Oleksandr Balabanov, Henry Yang, Bernhard Mehlig
Format: Article
Language: English
Published: IOP Publishing, 2023-01-01
Series: Machine Learning: Science and Technology
Online Access: https://doi.org/10.1088/2632-2153/ad0ab4
Similar Items
- Sparsifying priors for Bayesian uncertainty quantification in model discovery
  by: Seth M. Hirsh, et al. Published: 2022-02-01
- Uncertainty Quantification for MLP-Mixer Using Bayesian Deep Learning
  by: Abdullah A. Abdullah, et al. Published: 2023-04-01
- Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference
  by: Jan Steinbrener, et al. Published: 2020-10-01
- Uncertainty Quantification in Classifying Complex Geological Facies Using Bayesian Deep Learning
  by: Touhid Mohammad Hossain, et al. Published: 2022-01-01
- Improved Uncertainty Quantification for Neural Networks With Bayesian Last Layer
  by: Felix Fiedler, et al. Published: 2023-01-01