Certifying ensembles: a general certification theory with S-Lipschitzness


Bibliographic Details
Main Authors: Petrov, A; Eiras, F; Sanyal, A; Torr, PHS; Bibi, A
Format: Conference item
Language: English
Published: Journal of Machine Learning Research, 2023
Description: Improving and guaranteeing the robustness of deep learning models has been a topic of intense research. Ensembling, which combines several classifiers to provide a better model, has been shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift. However, the impact of ensembling on certified robustness is less well understood. In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles. Our results give precise conditions under which ensembles of robust classifiers are more robust than any constituent classifier, as well as conditions under which they are less robust.
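The description builds on the classical Lipschitz robustness certificate. As background only (this is the standard naive bound, not the paper's S-Lipschitz theory, and the function names are illustrative), a minimal sketch of how a Lipschitz constant yields a certified radius, and how a logit-averaging ensemble inherits one:

```python
def certified_radius(logits, lipschitz_const):
    # Classical certificate for an L-Lipschitz (per-logit) classifier:
    # each logit moves by at most L * eps under a perturbation of norm
    # eps, so the predicted class cannot change while
    # eps < margin / (2 * L), where margin = top logit - runner-up.
    top, runner_up = sorted(logits, reverse=True)[:2]
    return (top - runner_up) / (2.0 * lipschitz_const)

def averaged_ensemble_radius(member_logits, member_constants):
    # Logit-averaging ensemble: its logits are the member means, and by
    # the triangle inequality its Lipschitz constant is at most the mean
    # of the members' constants. This naive bound is exactly what a
    # sharper per-classifier analysis (such as the paper's S-Lipschitz
    # framework) aims to refine.
    k = len(member_logits)
    avg_logits = [sum(col) / k for col in zip(*member_logits)]
    avg_const = sum(member_constants) / k
    return certified_radius(avg_logits, avg_const)

# Two 1-Lipschitz members agreeing on class 0 with different margins:
r_a = certified_radius([2.0, 0.0], 1.0)                      # radius 1.0
r_b = certified_radius([1.0, 0.5], 1.0)                      # radius 0.25
r_ens = averaged_ensemble_radius([[2.0, 0.0], [1.0, 0.5]], [1.0, 1.0])
```

Note that under this naive bound the ensemble's radius lands between the members' radii; the abstract's claim is precisely that a finer analysis can say when an ensemble certifiably beats every member, and when it does worse.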
Institution: University of Oxford
Record ID: oxford-uuid:eb00af37-0baa-43da-a278-73ad22fda6f2