Interpretable Basis Decomposition for Visual Explanation
Explanations of the decisions made by a deep neural network are important for human end-users to understand and diagnose the trustworthiness of the system. Current neural networks used for visual recognition are generally treated as black boxes that do not provide any human interpretable jus...
Main Authors: Zhou, Bolei; Sun, Yiyou; Torralba, Antonio; Bau, David
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Book
Language: English
Published: Springer International Publishing, 2019
Online Access: https://hdl.handle.net/1721.1/122673
Similar Items
- Interpreting Deep Visual Representations via Network Dissection
  by: Zhou, Bolei, et al.
  Published: (2019)
- Network dissection: quantifying interpretability of deep visual representations
  by: Bau, David, et al.
  Published: (2020)
- Interpretable representation learning for visual intelligence
  by: Zhou, Bolei
  Published: (2018)
- Single image intrinsic decomposition without a single intrinsic image
  by: Ma, Wei-Chiu, et al.
  Published: (2020)
- Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books
  by: Zhu, Yukun, et al.
  Published: (2017)