On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations
Deep convolutional neural networks are generally regarded as robust function approximators. So far, this intuition has been based on perturbations to external stimuli, such as the images to be classified. Here we explore the robustness of convolutional neural networks to perturbations of the internal weights and architecture of the network itself. We show that convolutional networks are surprisingly robust to a number of internal perturbations in the higher convolutional layers, but the bottom convolutional layers are much more fragile. For instance, AlexNet shows less than a 30% decrease in classification performance when over 70% of the weight connections in the top convolutional or dense layers are randomly removed, yet performance drops to near chance when the same perturbation is applied to the first convolutional layer. Finally, we suggest further investigations that could continue to inform the robustness of convolutional networks to internal perturbations.
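The perturbation described above, randomly removing a fraction of the weight connections in a single layer of a pretrained AlexNet and re-evaluating classification accuracy, can be sketched as follows. This is a minimal illustration assuming a PyTorch/torchvision setup (not the framework or code used in the report); the ablation fraction, layer indices, and evaluation data are placeholders.

```python
import torch
from torchvision import models

def ablate_weights(layer: torch.nn.Module, fraction: float = 0.7) -> None:
    """Zero out a random `fraction` of the layer's weight entries in place."""
    with torch.no_grad():
        weight = layer.weight
        keep_mask = (torch.rand_like(weight) >= fraction).to(weight.dtype)
        weight.mul_(keep_mask)

# Pretrained AlexNet as a stand-in for whatever trained model is being probed.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# Perturb either the first convolutional layer (reported to be fragile) or a
# top convolutional / dense layer (reported to be robust), then evaluate.
ablate_weights(model.features[0], fraction=0.7)      # first conv layer
# ablate_weights(model.classifier[6], fraction=0.7)  # final dense layer

# Classification accuracy on a held-out set (e.g. the ImageNet validation
# images) would then be compared before and after the ablation; the
# evaluation loop is omitted here.
```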
Main Authors: | Cheney, Nicholas; Schrimpf, Martin; Kreiman, Gabriel
---|---
Format: | Technical Report
Language: | en_US
Published: | Center for Brains, Minds and Machines (CBMM), arXiv, 2017
Online Access: | http://hdl.handle.net/1721.1/107935
author | Cheney, Nicholas; Schrimpf, Martin; Kreiman, Gabriel |
collection | MIT |
description | Deep convolutional neural networks are generally regarded as robust function approximators. So far, this intuition has been based on perturbations to external stimuli, such as the images to be classified. Here we explore the robustness of convolutional neural networks to perturbations of the internal weights and architecture of the network itself. We show that convolutional networks are surprisingly robust to a number of internal perturbations in the higher convolutional layers, but the bottom convolutional layers are much more fragile. For instance, AlexNet shows less than a 30% decrease in classification performance when over 70% of the weight connections in the top convolutional or dense layers are randomly removed, yet performance drops to near chance when the same perturbation is applied to the first convolutional layer. Finally, we suggest further investigations that could continue to inform the robustness of convolutional networks to internal perturbations. |
format | Technical Report |
id | mit-1721.1/107935 |
institution | Massachusetts Institute of Technology |
language | en_US |
publishDate | 2017 |
publisher | Center for Brains, Minds and Machines (CBMM), arXiv |
record_format | dspace |
funding | This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. |
date issued | 2017-04-03 |
date available | 2017-04-07 |
type | Technical Report; Working Paper; Other |
identifier | http://hdl.handle.net/1721.1/107935; arXiv:1703.08245 |
series | CBMM Memo Series; 065 |
rights | Attribution-NonCommercial-ShareAlike 3.0 United States (http://creativecommons.org/licenses/by-nc-sa/3.0/us/) |
file format | application/pdf |
title | On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations |
url | http://hdl.handle.net/1721.1/107935 |