Chained generalisation bounds

Full description

This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique. By developing a general theoretical framework, we establish a duality between generalisation bounds based on the regularity of the loss function and their chained counterparts, which can be obtained by lifting the regularity assumption from the loss onto its gradient. This allows us to re-derive the chaining mutual information bound from the literature, and to obtain novel chained information-theoretic generalisation bounds based on the Wasserstein distance and other probability metrics. We show on some toy examples that the chained generalisation bound can be significantly tighter than its standard counterpart, particularly when the distribution of the hypotheses selected by the algorithm is very concentrated.
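
The record itself contains no formulas, but as a rough, hedged sketch of the objects the abstract refers to (standard notation from the information-theoretic generalisation literature, not taken from the paper): for a training sample S of n i.i.d. points, a hypothesis W returned by the algorithm, and a loss that is sigma-subgaussian for every fixed hypothesis, the standard mutual-information bound of Xu and Raginsky reads as below; the chained variant, schematically and up to constants, replaces I(W; S) by a multi-scale sum over quantisations [W]_k of the hypothesis space at scales 2^{-k}.

% Minimal LaTeX sketch; gen(W, S) denotes the expected generalisation
% error, I(.;.) mutual information, and [W]_k a 2^{-k}-scale quantisation
% of W (illustrative notation, not the paper's own).

% Standard bound for a sigma-subgaussian loss (Xu and Raginsky, 2017):
\[
  \bigl| \mathbb{E}\,\mathrm{gen}(W, S) \bigr|
    \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)} .
\]

% Chained refinement (schematic, up to constants): fine quantisations
% of a concentrated hypothesis distribution carry little information
% about the sample, which is what makes the chained bound tighter.
\[
  \bigl| \mathbb{E}\,\mathrm{gen}(W, S) \bigr|
    \;\lesssim\; \sum_{k} 2^{-k} \sqrt{\frac{I([W]_k;\, S)}{n}} .
\]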

Bibliographic Details
Main Authors: Clerico, E, Shidani, A, Deligiannidis, G, Doucet, A
Format: Conference item
Language: English
Published: Proceedings of Machine Learning Research, 2022