Tight certificates of adversarial robustness for randomly smoothed classifiers

Strong theoretical guarantees of robustness can be given for ensembles of classifiers generated by input randomization. Specifically, an ℓ2-bounded adversary cannot alter the ensemble prediction generated by additive isotropic Gaussian noise, where the radius for the adversary depends on both the variance of the distribution and the ensemble margin at the point of interest. We build on and considerably expand this work across broad classes of distributions. In particular, we offer adversarial robustness guarantees and associated algorithms for the discrete case, where the adversary is ℓ0-bounded. Moreover, we exemplify how the guarantees can be tightened with specific assumptions about the function class of the classifier, such as a decision tree. We empirically illustrate these results, with and without functional restrictions, across image and molecule datasets.
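For context on the Gaussian result the abstract builds on, the sketch below computes the certified ℓ2 radius from the smoothed ensemble's margin, using the standard formula for isotropic Gaussian smoothing. This is a minimal illustration, not the authors' code; the function name and example numbers are assumptions.

```python
# Minimal sketch of the Gaussian ell_2 certificate: given confidence bounds on
# the smoothed ensemble's top-class and runner-up probabilities, the prediction
# is provably unchanged within an ell_2 ball of the radius returned here.
from scipy.stats import norm

def gaussian_l2_radius(p_top: float, p_runner_up: float, sigma: float) -> float:
    """Certified ell_2 radius under additive isotropic Gaussian noise N(0, sigma^2 I).

    p_top: lower bound on the smoothed probability of the predicted class.
    p_runner_up: upper bound on the smoothed probability of any other class.
    """
    if p_top <= p_runner_up:
        return 0.0  # margin too small: nothing can be certified
    # The radius grows with both the noise level sigma and the ensemble margin.
    return 0.5 * sigma * (norm.ppf(p_top) - norm.ppf(p_runner_up))

# Example: sigma = 0.5, top class at >= 0.9, every other class at <= 0.1
print(gaussian_l2_radius(0.9, 0.1, 0.5))  # ~0.64
```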

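For the discrete ℓ0 case, the sketch below shows one natural smoothing distribution of the kind the abstract refers to: each coordinate of a K-ary input is independently kept with probability alpha and otherwise resampled uniformly over the K categories. The distribution, names, and parameters here are illustrative assumptions; the paper's tight ℓ0 certificate itself is not reproduced.

```python
# Illustrative discrete smoothing for the ell_0 setting, plus a Monte Carlo
# vote over the smoothed classifier. Only the smoothing step is shown; the
# certificate computation from the paper is omitted.
import numpy as np

def smooth_sample(x: np.ndarray, alpha: float, K: int,
                  rng: np.random.Generator) -> np.ndarray:
    """Draw one noisy copy of the K-ary vector x."""
    keep = rng.random(x.shape) < alpha            # coordinates left untouched
    resampled = rng.integers(0, K, size=x.shape)  # uniform over {0, ..., K-1}
    return np.where(keep, x, resampled)

def smoothed_vote(classifier, x, alpha, K, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed ensemble's class frequencies."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        label = classifier(smooth_sample(x, alpha, K, rng))
        votes[label] = votes.get(label, 0) + 1
    return {label: count / n_samples for label, count in votes.items()}
```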
Bibliographic Details
Main Authors: Lee, Guang-He; Yuan, Yang; Jaakkola, Tommi S
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Neural Information Processing Systems (NIPS), 2021
Citation: Lee, Guang-He et al. "Tight certificates of adversarial robustness for randomly smoothed classifiers." 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), December 2019, Vancouver, Canada. Neural Information Processing Systems, 2019. © The Author(s)
ISSN: 1049-5258
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Online Access: https://hdl.handle.net/1721.1/129439
https://papers.nips.cc/paper/2019/hash/fa2e8c4385712f9a1d24c363a2cbe5b8-Abstract.html