Adversarial examples are not bugs, they are features
© 2019 Neural Information Processing Systems Foundation. All rights reserved.

Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to th...
| Main Authors: | Ilyas, A; Santurkar, S; Tsipras, D; Engstrom, L; Tran, B; Madry, A |
|---|---|
| Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
| Format: | Article |
| Language: | English |
| Published: | 2021 |
| Online Access: | https://hdl.handle.net/1721.1/137500 |
Similar Items
- Image synthesis with a single (robust) classifier
  by: Santurkar, S, et al.
  Published: (2021)
- Adversarially Robust Generalization Requires More Data
  by: Schmidt, Ludwig, et al.
  Published: (2021)
- Image synthesis with a single (robust) classifier
  by: Santurkar, Shibani, et al.
  Published: (2021)
- How does batch normalization help optimization?
  by: Madry, Aleksander, et al.
  Published: (2021)
- Towards deep learning models resistant to adversarial attacks
  by: Madry, A, et al.
  Published: (2021)