Understanding the landscape of adversarial robustness

This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

Bibliographic Details
Main Author: Engstrom, Logan (Logan G.)
Other Authors: Aleksander Mądry (thesis advisor)
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2019
Subjects: Electrical Engineering and Computer Science
Online Access: https://hdl.handle.net/1721.1/123021
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 108-115). 149 pages, application/pdf. OCLC: 1127640126.

Rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582

Abstract: Despite their performance on standard tasks in computer vision, natural language processing, and voice recognition, state-of-the-art models are pervasively vulnerable to adversarial examples: inputs that have been slightly perturbed, leaving their semantic content unchanged, so as to cause malicious behavior in a classifier. The study of adversarial robustness has so far largely focused on perturbations bounded in ℓ_p-norms, in the setting where the attacker knows the full model and controls exactly what input is sent to the classifier. This threat model is unrealistic in many respects: models are vulnerable to classes of slight perturbations that ℓ_p bounds do not capture, adversaries often will not have full model access in practice, and in the physical world it is impossible to control exactly what image reaches the classifier. We develop new algorithms and frameworks for exploiting vulnerabilities even under these restricted threat models, and we find that models remain highly vulnerable to adversarial examples there, highlighting the need for further research toward models that are truly robust and reliable.
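
For illustration of the white-box, ℓ_p-bounded threat model the abstract describes, below is a minimal sketch, assuming PyTorch, of the classic fast gradient sign method (FGSM) for constructing an ℓ_∞-bounded adversarial example. This is a standard textbook attack, not one of the thesis's own algorithms; the model, labels, and epsilon budget are hypothetical placeholders.

    # Minimal FGSM sketch (assumes PyTorch): one signed-gradient step
    # inside an l_inf ball of radius epsilon around the input.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # Track gradients with respect to the input, not the weights.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss, then clamp the
        # result back to the valid [0, 1] image range.
        x_adv = x + epsilon * x.grad.sign()
        return torch.clamp(x_adv, 0.0, 1.0).detach()

Even this single white-box gradient step often flips a standard classifier's prediction while the perturbation stays visually imperceptible; the abstract's point is that such vulnerabilities persist even when the attacker has far less access than this.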