Adversarial Examples and Distribution Shift: A Representations Perspective
Adversarial attacks cause machine learning models to produce wrong predictions by minimally perturbing their input. In this thesis, we take a step towards understanding how these perturbations affect the intermediate data representations of the model. Specifically, we compare standard and adversaria...
Main Author: Nadhamuni, Kaveri
Other Authors: Madry, Aleksander
Format: Thesis
Published: Massachusetts Institute of Technology, 2022
Online Access: https://hdl.handle.net/1721.1/138945
Similar Items
- Adversarial Examples in Simpler Settings
  by: Wang, Tony T.
  Published: (2022)
- Adversarial examples are not bugs, they are features
  by: Ilyas, A, et al.
  Published: (2021)
- Defence on unrestricted adversarial examples
  by: Chan, Jarod Yan Cheng
  Published: (2021)
- Adversarial examples in neural networks
  by: Lim, Ruihong
  Published: (2024)
- Defense on unrestricted adversarial examples
  by: Sim, Chee Xian
  Published: (2023)