Vulnerability analysis on noise-injection based hardware attack on deep neural networks

Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to adversarial examples. Recent studies show that adding carefully crafted small perturbations to the input layer can mislead a classifier into arbitrary categories. However, most adversarial attack algorithms concentrate only on the inputs of the model; the effect of tampering with internal nodes is seldom studied. An adversarial attack, if extended to a deployed hardware system, can perturb or alter intermediate data during real-time processing. To investigate the vulnerability of deep neural network hardware under such attacks, we comprehensively evaluate 10 popular DNN models by injecting noise into each layer of these models. Our experimental results indicate that more accurate networks are more prone to disturbance of selected internal layers. For traditional convolutional network structures (AlexNet and the VGG family), the last convolution layer is the most assailable. For state-of-the-art architectures (the Inception, ResNet and DenseNet families), perturbing as little as 0.1% of a layer's elements, or one element per channel, can subvert the original predictions, and over 65% of computational layers suffer from this vulnerability. Our findings reveal that optimizing for accuracy, model size and computational efficiency can unwittingly sacrifice the robustness of a deep learning system.
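
The abstract describes the evaluation only at a high level. As a rough illustration, the sketch below shows one way such layer-wise noise injection could be reproduced; it is a minimal sketch and not the authors' code. It assumes a pretrained torchvision ResNet-50, a placeholder test image ("example.jpg"), an arbitrary noise magnitude, and it restricts injection to convolution layers via PyTorch forward hooks.

```python
# Minimal sketch (not the authors' code) of layer-wise noise injection:
# perturb one element per channel of an intermediate feature map via a
# forward hook and check whether the top-1 prediction is subverted.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(pretrained=True).eval()  # any pretrained classifier

def make_noise_hook(magnitude=1.0):
    """Return a forward hook that adds `magnitude` to one random element
    per channel of a (N, C, H, W) feature map."""
    def hook(module, inputs, output):
        perturbed = output.clone()
        if perturbed.dim() == 4:
            n, c, h, w = perturbed.shape
            idx = torch.randint(0, h * w, (c,))          # one position per channel
            flat = perturbed.reshape(n, c, h * w)
            flat[0, torch.arange(c), idx] += magnitude   # inject the noise
            perturbed = flat.reshape(n, c, h, w)
        return perturbed                                  # replaces the layer output
    return hook

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("example.jpg")).unsqueeze(0)   # placeholder test image

with torch.no_grad():
    clean_pred = model(x).argmax(dim=1).item()
    # Attach the hook to one internal layer at a time and compare predictions.
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            handle = module.register_forward_hook(make_noise_hook())
            noisy_pred = model(x).argmax(dim=1).item()
            handle.remove()
            if noisy_pred != clean_pred:
                print(f"prediction subverted at layer: {name}")
```

Sweeping such a hook over every layer of each model and over many test images, and recording how often the top-1 label flips, would yield the kind of per-layer vulnerability profile the abstract summarises.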


Bibliographic Details
Main Authors: Liu, Wenye; Wang, Si; Chang, Chip-Hong
Other Authors: School of Electrical and Electronic Engineering; Centre for Integrated Circuits and Systems
Institution: Nanyang Technological University
Format: Conference Paper
Language: English
Published: 2019
Published in: 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)
Subjects: Engineering::Electrical and electronic engineering; Deep Neural Networks; Hardware Attacks
Citation: Liu, W., Wang, S. & Chang, C.-H. (2019). Vulnerability analysis on noise-injection based hardware attack on deep neural networks. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). https://hdl.handle.net/10356/136863
Funding: MOE (Min. of Education, S'pore), grant MOE-2015-T2-2-013
Version: Accepted version
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Online Access: https://hdl.handle.net/10356/136863