Exploring the landscape of backdoor attacks on deep neural network models

This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

Bibliographic Details
Main Author: Turner, Alexander M., S.M., Massachusetts Institute of Technology
Other Authors: Aleksander Mądry
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2019
Subjects: Electrical Engineering and Computer Science
Online Access: https://hdl.handle.net/1721.1/123127
Abstract
Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by introducing a small set of poisoned training inputs, an adversary is able to plant a backdoor in the trained model that enables them to fully control the model's behavior during inference. In this thesis, the landscape of these attacks is investigated from the perspective of both an adversary seeking an effective attack and a practitioner seeking protection against such attacks.

While the backdoor attacks that have been previously demonstrated are very powerful, they crucially rely on allowing the adversary to introduce arbitrary inputs that are, often blatantly, mislabelled. As a result, the introduced inputs are likely to raise suspicion whenever even a rudimentary data filtering scheme flags them as outliers. This makes label-consistency, the condition that inputs are consistent with their labels, crucial for these attacks to remain undetected. We draw on adversarial perturbations and generative methods to develop a framework for executing efficient, yet label-consistent, backdoor attacks.

Furthermore, we propose the use of differential privacy as a defence against backdoor attacks, since it prevents the model from relying heavily on features that are present in only a few samples. As we do not require formal privacy guarantees, we are able to relax the requirements imposed by differential privacy and instead evaluate our methods directly on the explicit goal of resisting the backdoor attack. We propose a method that uses a relaxed differentially private training procedure to achieve empirical protection from backdoor attacks with only a moderate decrease in accuracy on natural inputs.
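The label-consistent attack summarized above can be made concrete with a short sketch. This is a minimal illustration under assumptions, not the thesis's exact procedure: it assumes a generic differentiable PyTorch classifier, images as float tensors in [0, 1], and illustrative choices for the perturbation budget and the trigger (a small bright square in one corner); all function names here are ours. The idea is to perturb only target-class images so their natural features become harder to learn, then stamp the trigger while leaving the labels untouched.

```python
# Illustrative sketch of label-consistent poisoning (assumed details:
# the PGD budget, trigger shape/position, and function names are ours).
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-inf PGD: move x away from its correct label y,
    weakening the natural features the model could otherwise learn."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
    return x_adv.detach()

def apply_trigger(x, value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner."""
    x = x.clone()
    x[..., -size:, -size:] = value
    return x

def poison_label_consistent(model, x_target, y_target):
    """Poison ONLY images of the target class. Labels stay correct, so
    the poisoned points look unremarkable to label-based filtering, yet
    training comes to associate the trigger with the target class."""
    x_hard = pgd_perturb(model, x_target, y_target)
    return apply_trigger(x_hard), y_target
```

At inference time, stamping the same trigger on any input would then steer the poisoned model toward the target class, which is what makes the backdoor effective despite every training label being consistent.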
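Likewise, the defence can be sketched as a training step in the style of DP-SGD with the formal privacy accounting dropped. Everything below is an assumption-laden illustration rather than the thesis's tuned procedure: `clip_norm`, `noise_std`, and the size-one microbatch loop are our choices. Clipping each example's gradient bounds the influence any small set of poisoned samples can exert on an update, and the added Gaussian noise further masks their contribution.

```python
# Illustrative sketch of a relaxed differentially-private training step
# (assumed details: hyperparameters and the per-example loop are ours;
# no (epsilon, delta) budget is tracked, matching the relaxed setting).
import torch
import torch.nn.functional as F

def relaxed_dp_step(model, optimizer, xb, yb, clip_norm=1.0, noise_std=0.1):
    """One update on batch (xb, yb) with per-example gradient clipping
    plus Gaussian noise, so no handful of samples dominates the step."""
    params = [p for p in model.parameters() if p.requires_grad]
    accum = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):  # per-example ("microbatch of one") gradients
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        coef = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)  # clip
        for a, g in zip(accum, grads):
            a.add_(coef * g)
    optimizer.zero_grad()
    for p, a in zip(params, accum):
        noise = noise_std * clip_norm * torch.randn_like(p)
        p.grad = (a + noise) / len(xb)  # noisy, clipped average gradient
    optimizer.step()
```

In this relaxed setting one would tune `clip_norm` and `noise_std` against the measured attack success rate and natural accuracy rather than against a privacy budget, which is the relaxation the abstract describes.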
Notes
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 71-75). 83 pages.

Rights
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582