The Effects of Image Distribution and Task on Adversarial Robustness

Bibliographic Details
Main Authors: Kunhardt, Owen; Deza, Arturo; Poggio, Tomaso
Format: Technical Report
Series: CBMM Memo No. 116
Published: Center for Brains, Minds and Machines (CBMM), 2021
Funding: Supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216
Online Access: https://hdl.handle.net/1721.1/129813

Description
In this paper, we propose an adaptation to the area under the curve (AUC) metric to measure the adversarial robustness of a model over a particular ε-interval [ε_0, ε_1] (an interval of adversarial perturbation strengths) that facilitates unbiased comparisons across models when they have different initial performance at ε_0. This metric can be used to determine how adversarially robust a model is to different image distributions or tasks (or some other variable), and/or to measure how robust a model is relative to other models. We applied this adversarial robustness metric to models trained on MNIST, CIFAR-10, and a Fusion dataset (CIFAR-10 + MNIST), where each model performed either a digit or object recognition task using a LeNet, ResNet50, or fully connected network (FullyConnectedNet) architecture, and found the following: 1) CIFAR-10 models are inherently less adversarially robust than MNIST models; 2) both the image distribution and the task that a model is trained on can affect the adversarial robustness of the resulting model; and 3) pretraining with a different image distribution and task sometimes carries over the adversarial robustness induced by that image distribution and task to the resulting model. Collectively, our results imply non-trivial differences in the learned representation space of one perceptual system over another given its exposure to different image statistics or tasks (mainly objects vs. digits). Moreover, these results hold even when model systems are equalized to have the same level of performance, or when exposed to approximately matched image statistics of fusion images but with different tasks.
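
To make the proposed metric concrete, below is a minimal sketch of how an AUC over an ε-interval might be computed from a model's accuracy-vs-ε curve. The normalization shown (dividing the raw area by the accuracy at ε_0 and the interval width) is an illustrative assumption chosen to put models with different initial accuracies on a comparable scale; the function name, the ε grid, and the accuracy values are hypothetical and not taken from the memo.

```python
import numpy as np

def adversarial_auc(eps, acc, normalize=True):
    """Area under the accuracy-vs-epsilon curve over [eps[0], eps[-1]].

    eps -- 1-D increasing array of perturbation strengths.
    acc -- model accuracy measured at each epsilon in `eps`.
    With normalize=True the raw area is divided by acc[0] (the accuracy
    at eps_0) and by the interval width, so that models starting from
    different initial accuracies land on a common 0-to-1 scale. This
    normalization is an assumption made for illustration, not
    necessarily the exact adaptation used in the memo.
    """
    eps = np.asarray(eps, dtype=float)
    acc = np.asarray(acc, dtype=float)
    raw_area = np.trapz(acc, eps)  # trapezoidal rule over the eps grid
    if not normalize:
        return raw_area
    return raw_area / (acc[0] * (eps[-1] - eps[0]))

# Hypothetical accuracy curves for two models on the same epsilon grid.
eps_grid = np.linspace(0.0, 0.3, 7)
acc_a = np.array([0.99, 0.97, 0.93, 0.85, 0.72, 0.55, 0.40])  # MNIST-like
acc_b = np.array([0.92, 0.70, 0.45, 0.28, 0.18, 0.12, 0.09])  # CIFAR-10-like

print(adversarial_auc(eps_grid, acc_a))  # larger value -> more robust
print(adversarial_auc(eps_grid, acc_b))
```

In practice, the accuracy curve itself would come from evaluating the model on adversarial examples generated at each ε (e.g., with an attack such as FGSM or PGD); the metric only assumes that such a curve is available over [ε_0, ε_1].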