Interpretable Basis Decomposition for Visual Explanation
Explanations of the decisions made by a deep neural network are important for human end-users to be able to understand and diagnose the trustworthiness of the system. Current neural networks used for visual recognition are generally used as black boxes that do not provide any human-interpretable justification for a prediction. In this work we propose a new framework called Interpretable Basis Decomposition for providing visual explanations for classification networks. By decomposing the neural activations of the input image into semantically interpretable components pre-trained from a large concept corpus, the proposed framework is able to disentangle the evidence encoded in the activation feature vector and quantify the contribution of each piece of evidence to the final prediction. We apply our framework to several popular networks for visual recognition and show that it is able to explain their predictions in a human-interpretable way. The human interpretability of the visual explanations produced by our framework and other recent explanation methods is evaluated through Amazon Mechanical Turk, showing that our framework generates more faithful and interpretable explanations. (The code and data are available at https://github.com/CSAILVision/IBD.)
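The additive decomposition the abstract describes can be illustrated in a few lines. Below is a minimal NumPy sketch, not the authors' implementation: the concept matrix `Q`, feature vector `f`, and class weight `w` are random stand-ins for real network activations and a concept corpus. It writes the feature vector as a least-squares combination of concept vectors plus a residual, so the class logit splits into one additive contribution per concept.

```python
# Minimal sketch of interpretable basis decomposition (illustrative only):
# decompose an activation vector onto concept vectors, then split the class
# logit into per-concept contributions. Q, f, and w are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
d, n_concepts = 512, 6                       # feature dim, number of concepts
Q = rng.normal(size=(n_concepts, d))         # rows: interpretable concept vectors
Q /= np.linalg.norm(Q, axis=1, keepdims=True)
f = rng.normal(size=d)                       # pooled activation of an input image
w = rng.normal(size=d)                       # classifier weights for one class

# Least-squares decomposition: f ≈ Q.T @ s, with whatever is left as residual.
s, *_ = np.linalg.lstsq(Q.T, f, rcond=None)
residual = f - Q.T @ s

# The logit w·f then splits into one additive term per concept plus a residual
# term; ranking these terms gives the per-concept evidence for the prediction.
contributions = s * (Q @ w)
print("logit:", w @ f)
print("sum of parts:", contributions.sum() + w @ residual)  # equal up to fp error
for i in np.argsort(-contributions):
    print(f"concept_{i}: {contributions[i]:+.3f}")
```

In the released framework the components are pre-trained from a labeled concept corpus rather than drawn at random, and the per-concept contributions are rendered as visual explanations.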
Main Authors: | Zhou, Bolei; Sun, Yiyou; Torralba, Antonio; Bau, David |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Book (Conference Paper) |
Language: | English |
Published: | Springer International Publishing, 2019 |
Online Access: | https://hdl.handle.net/1721.1/122673 |
id | mit-1721.1/122673 |
institution | Massachusetts Institute of Technology |
type | Conference Paper (http://purl.org/eprint/type/ConferencePaper)
citation | Zhou, Bolei et al. "Interpretable Basis Decomposition for Visual Explanation." European Conference on Computer Vision, September 2018, Munich, Germany, Springer Nature, October 2018 © 2018 Springer Nature
doi | http://dx.doi.org/10.1007/978-3-030-01237-3_8
isbn | 9783030012366; 9783030012373
issn | 0302-9743; 1611-3349
departments | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Laboratory for Computer Science
funding | United States. Defense Advanced Research Projects Agency (Contract FA8750-18-C0004); National Science Foundation (Grant 1524817)
license | Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)