Conditionally Gaussian PAC-Bayes

Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound, without relying on any surrogate loss. Empirical results show that this approach outperforms currently available PAC-Bayesian training methods.
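To make the quantity being optimised concrete, the sketch below computes a classical McAllester-style PAC-Bayes bound. This is an illustrative formula from the general PAC-Bayes literature, not the specific bound or algorithm of this paper; the function name and example values are hypothetical.

```python
import math

def mcallester_bound(emp_risk, kl, n, delta=0.05):
    """Illustrative McAllester-style PAC-Bayes bound (not this paper's
    algorithm): with probability >= 1 - delta over an i.i.d. sample of
    size n, the expected risk of the posterior is bounded by
        emp_risk + sqrt((KL + ln(2*sqrt(n)/delta)) / (2*n)),
    where KL is the KL divergence from posterior to prior (in nats)."""
    return emp_risk + math.sqrt(
        (kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n)
    )

# Hypothetical example: 10% empirical error, KL = 5 nats, 1000 samples.
print(round(mcallester_bound(0.10, 5.0, 1000), 4))
```

Training procedures of the kind the abstract describes treat a bound like this (with the misclassification error, or a surrogate for it, as `emp_risk`) as the objective and minimise it over the posterior's parameters by stochastic gradient descent.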

Full Description

Detailed Bibliography
Main Authors: Clerico, E, Deligiannidis, G, Doucet, A
Material Type: Conference item
Language: English
Published: Journal of Machine Learning Research, 2022
author Clerico, E
Deligiannidis, G
Doucet, A
collection OXFORD
description Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound, without relying on any surrogate loss. Empirical results show that this approach outperforms currently available PAC-Bayesian training methods.
format Conference item
id oxford-uuid:e08aa158-f6ff-4812-980b-c9e3673606cf
institution University of Oxford
language English
publishDate 2022
publisher Journal of Machine Learning Research
title Conditionally Gaussian PAC-Bayes