Enhancing Deep Neural Network Saliency Visualizations With Gradual Extrapolation
In this paper, an enhancement technique for class activation mapping methods, such as gradient-weighted class activation mapping (Grad-CAM) or excitation backpropagation, is proposed to produce visual explanations of decisions made by convolutional neural network-based models. The proposed idea, called Gradual Extrapolation, ...
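For context, the record below concerns an enhancement of class activation mapping. The following is a minimal sketch of the baseline Grad-CAM procedure that such methods build on, not the paper's Gradual Extrapolation itself; it assumes PyTorch with a torchvision ResNet-18, and the chosen layer (`layer4`) and helper names are illustrative only.

```python
# Minimal Grad-CAM sketch (baseline method, not the paper's Gradual Extrapolation).
# Assumes PyTorch + torchvision; layer choice and names are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output                      # feature maps of the last conv block
    output.register_hook(lambda grad: gradients.update(value=grad))

model.layer4.register_forward_hook(forward_hook)       # last convolutional block of ResNet-18

def grad_cam(image, class_idx=None):
    """Return a normalized heatmap the size of the input image for the chosen class."""
    scores = model(image)                               # [1, num_classes]
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()                     # gradients of the class score

    acts = activations["value"]                         # [1, C, h, w]
    grads = gradients["value"]                          # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach()

# Usage (hypothetical input): heatmap = grad_cam(preprocessed_image)  # [1, 3, 224, 224] tensor in
```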
| Main Author: | Tomasz Szandala |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9468713/ |
Similar Items
- Feature-Based Interpretation of the Deep Neural Network
  by: Eun-Hun Lee, et al.
  Published: (2021-11-01)
- Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices
  by: Pavlo Radiuk, et al.
  Published: (2024-03-01)
- Explainability and Evaluation of Vision Transformers: An In-Depth Experimental Study
  by: Sédrick Stassin, et al.
  Published: (2023-12-01)
- Explaining Deep Learning-Based Traffic Classification Using a Genetic Algorithm
  by: Seyoung Ahn, et al.
  Published: (2021-01-01)
- Explainable Deep Learning Models With Gradient-Weighted Class Activation Mapping for Smart Agriculture
  by: Luyl-Da Quach, et al.
  Published: (2023-01-01)