Conditionally Gaussian PAC-Bayes
Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mis...
| Main Authors: | Clerico, E; Deligiannidis, G; Doucet, A |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | Journal of Machine Learning Research, 2022 |
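The abstract refers to the common approach of replacing the misclassification error with a surrogate loss when optimising a PAC-Bayesian bound by stochastic gradient descent. Below is a minimal illustrative sketch of that generic surrogate-based scheme, not the algorithm proposed in this paper: a diagonal-Gaussian posterior over the weights of a linear classifier, trained by SGD to minimise a McAllester-style bound with a sigmoid surrogate for the 0-1 loss. The model, toy data, bound form, and all hyperparameters are assumptions chosen for illustration.

```python
# Sketch only (not the paper's method): SGD on a McAllester-style PAC-Bayes
# bound for a stochastic linear classifier, with a bounded sigmoid surrogate
# standing in for the 0-1 error. All names and constants are illustrative.
import math
import torch

torch.manual_seed(0)

n, d = 500, 20
X = torch.randn(n, d)
y = (X[:, 0] > 0).float()                       # toy binary labels in {0, 1}

# Posterior Q = N(mu, diag(exp(2*rho))); prior P = N(0, I).
mu = torch.zeros(d, requires_grad=True)
rho = torch.full((d,), -2.0, requires_grad=True)
opt = torch.optim.SGD([mu, rho], lr=0.05)
delta = 0.05                                    # confidence level

def kl_gaussian(mu, rho):
    """KL(Q || P) for a diagonal Gaussian Q against a standard Gaussian prior."""
    var = torch.exp(2 * rho)
    return 0.5 * torch.sum(var + mu ** 2 - 1.0 - 2 * rho)

for step in range(200):
    opt.zero_grad()
    eps = torch.randn(d)
    w = mu + torch.exp(rho) * eps               # one reparameterised weight sample
    scores = X @ w
    margins = (2 * y - 1) * scores              # signed margins
    surrogate = torch.sigmoid(-margins).mean()  # bounded surrogate for the 0-1 loss
    # McAllester-style complexity term: sqrt((KL + ln(2*sqrt(n)/delta)) / (2n)).
    penalty = torch.sqrt(
        (kl_gaussian(mu, rho) + math.log(2 * math.sqrt(n) / delta)) / (2 * n)
    )
    bound = surrogate + penalty                 # surrogate objective, not the 0-1 bound
    bound.backward()
    opt.step()

with torch.no_grad():
    err01 = ((X @ mu > 0).float() != y).float().mean()
    print(f"surrogate bound: {bound.item():.3f}, empirical 0-1 error: {err01.item():.3f}")
```

Because the sigmoid surrogate upper-approximates the 0-1 loss only loosely, the minimised objective and the certified misclassification bound can diverge, which is the mismatch the abstract alludes to.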
Similar Items

- Chained generalisation bounds
  by: Clerico, E, et al.
  Published: (2022)
- On PAC-Bayesian reconstruction guarantees for VAEs
  by: Cherief-Abdellatif, B-E, et al.
  Published: (2022)
- Tight bounds for the expected risk of linear classifiers and PAC-Bayes finite-sample guarantees
  by: Honorio Carrillo, Jean, et al.
  Published: (2018)
- Variational Bayes for Non-Gaussian autoregressive models
  by: Penny, W, et al.
  Published: (2000)
- Stable ResNet
  by: Hayou, S, et al.
  Published: (2021)