Towards deep learning models resistant to adversarial attacks
© International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. All rights reserved. Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To addres...
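The attack setting the abstract describes is commonly instantiated with projected gradient descent (PGD), the method this paper studies: repeatedly take a gradient-ascent step on the loss and project back into an ε-ball around the input. As a minimal sketch, the toy linear classifier, the `pgd_attack` name, and all parameter values below are illustrative assumptions, not from this record; the paper applies the same idea to deep networks via backpropagated gradients.

```python
import numpy as np

def pgd_attack(x, y, w, eps=0.6, alpha=0.1, steps=20):
    """Projected gradient descent inside the L-infinity ball of radius eps.

    Toy setting (assumed for illustration): a linear classifier
    sign(w @ x) with label y in {-1, +1} and margin loss
    L(x) = -y * (w @ x), so grad_x L = -y * w.
    """
    x_adv = x.copy()
    for _ in range(steps):
        grad = -y * w                             # gradient of the loss w.r.t. the input
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

# A clean input classified correctly (w @ x = 1.5 > 0 for y = +1):
x = np.array([1.0, -0.5])
w = np.array([2.0, 1.0])
x_adv = pgd_attack(x, y=1, w=w)
# x_adv stays within eps of x yet flips the sign of w @ x_adv.
```

Because the loss is linear here, the gradient is constant and PGD reduces to walking to the corner of the ε-ball; for deep networks the gradient is recomputed each step, which is what makes the multi-step attack stronger than a single-step one.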
Main Authors: | Madry, A; Makelov, A; Schmidt, L; Tsipras, D; Vladu, A
---|---
Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: | Article
Language: | English
Published: | 2021
Online Access: | https://hdl.handle.net/1721.1/137496
Similar Items

- Towards machine learning models robust to adversarial examples and backdoor attacks
  by: Makelov, Aleksandar
  Published: (2023)
- Adversarial attacks on deep learning
  by: Yee, An Qi
  Published: (2023)
- Adversarial examples are not bugs, they are features
  by: Ilyas, A, et al.
  Published: (2021)
- Evaluation of adversarial attacks against deep learning models
  by: Lam, Sabrina Jing Wen
  Published: (2023)
- Evaluation of adversarial attacks against deep learning models
  by: Ta, Anh Duc
  Published: (2022)