Understanding the role of individual units in a deep neural network

Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
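
To make the description concrete, here is a minimal sketch of the unit-concept matching that network dissection performs: threshold a unit's activation maps and score their overlap with a concept's segmentation masks by intersection-over-union. The array shapes, the 0.99 quantile threshold, and the random stand-in data are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def unit_concept_iou(activations, concept_masks, quantile=0.99):
    """Score one hidden unit against one visual concept.

    activations: (N, H, W) array of the unit's activation maps, assumed
                 upsampled to the segmentation resolution.
    concept_masks: (N, H, W) boolean array of ground-truth concept masks.
    quantile: dataset-wide activation threshold (top 1% by default).
    """
    # The threshold is set over the whole dataset, so the unit's "on"
    # region covers a fixed fraction of all spatial locations.
    threshold = np.quantile(activations, quantile)
    unit_mask = activations > threshold
    intersection = np.logical_and(unit_mask, concept_masks).sum()
    union = np.logical_or(unit_mask, concept_masks).sum()
    return intersection / union if union > 0 else 0.0

# Random stand-in data; real inputs would be CNN feature maps and
# densely labeled segmentation masks.
rng = np.random.default_rng(0)
acts = rng.random((8, 56, 56))           # hypothetical unit activations
masks = rng.random((8, 56, 56)) > 0.95   # hypothetical "tree" masks
print(unit_concept_iou(acts, masks))
```

A unit is then labeled with the concept that maximizes this score across the concept vocabulary.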

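The GAN experiment the description mentions can be sketched the same way: zero out a small set of channels at an intermediate generator layer and regenerate, comparing outputs before and after. Everything model-specific below (the Generator class, its layer4 attribute, the unit indices) is a hypothetical placeholder, not the authors' code.

```python
import torch

def ablate_units(layer, unit_indices):
    """Register a forward hook that zeros the given channels of `layer`'s output."""
    def hook(module, inputs, output):
        output[:, unit_indices] = 0.0  # deactivate the selected units
        return output
    return layer.register_forward_hook(hook)

# Usage sketch (hypothetical model and unit indices):
# generator = Generator().eval()
# z = torch.randn(1, generator.z_dim)
# before = generator(z)                                   # scene with trees
# handle = ablate_units(generator.layer4, [12, 54, 301])  # "tree" units off
# after = generator(z)                                    # trees removed
# handle.remove()
```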

Bibliographic Details
Main Authors: Bau, David, Zhu, Jun-Yan, Strobelt, Hendrik, Lapedriza Garcia, Agata, Zhou, Bolei, Torralba, Antonio
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Media Laboratory; MIT-IBM Watson AI Lab
Format: Article
Language: English
Published: Proceedings of the National Academy of Sciences 117, 48 (September 2020)
Online Access: https://hdl.handle.net/1721.1/130269
Citation: Bau, David et al. "Understanding the role of individual units in a deep neural network." Proceedings of the National Academy of Sciences 117, 48 (September 2020): 30071-30078. © 2020 National Academy of Sciences
DOI: http://dx.doi.org/10.1073/pnas.1907375117
ISSN: 0027-8424, 1091-6490
Funding: Defense Advanced Research Projects Agency (Award FA8750-18-C-0004); NSF (Grants 1524817 and BIGDATA-1447476)
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.