Towards deep learning models resistant to adversarial attacks

© International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. All rights reserved. Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address...
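The abstract describes adversarial examples: inputs perturbed so slightly that they look unchanged to a human yet are misclassified by the network. A minimal sketch of how such an example can be crafted with projected gradient descent (PGD) under an L-infinity budget follows, assuming a PyTorch classifier; the model, epsilon, step size, and iteration count are illustrative assumptions rather than details taken from this record.

```python
# Hedged sketch: PGD attack under an L-infinity budget.
# eps, alpha, and steps are illustrative values, not from the paper's record.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Perturb x within an L-inf ball of radius eps so that the model's
    cross-entropy loss on the true labels y is (approximately) maximized."""
    x_adv = x.clone().detach()
    # Random start inside the epsilon ball (a common PGD variant).
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient ascent step, then project back
        # onto the epsilon ball around the original input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)  # stay in valid pixel range
    return x_adv.detach()
```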


Bibliographic Details
Main Authors: Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/1721.1/137496

Similar Items