There and back again: revisiting backpropagation saliency methods

Full description

Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample. A popular class of such methods is based on backpropagating a signal and analyzing the resulting gradient. Despite much research on such methods, relatively little work has been done to clarify the differences between them or the desiderata of these techniques. Thus, there is a need to rigorously understand the relationships between different methods as well as their failure modes. In this work, we conduct a thorough analysis of backpropagation-based saliency methods and propose a single framework under which several such methods can be unified. As a result of our study, we make three additional contributions. First, we use our framework to propose NormGrad, a novel saliency method based on the spatial contribution of gradients of convolutional weights. Second, we combine saliency maps at different layers to test the ability of saliency methods to extract complementary information at different network levels (e.g., trading off spatial resolution and distinctiveness), and we explain why some methods fail at specific layers (e.g., Grad-CAM anywhere besides the last convolutional layer). Third, we introduce a class-sensitivity metric and a meta-learning-inspired paradigm applicable to any saliency method for improving sensitivity to the output class being explained.
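
The record itself contains no code, but the NormGrad idea in the description above can be illustrated briefly. Below is a minimal PyTorch sketch, assuming the "virtual 1x1 convolution" reading of the phrase "spatial contribution of gradients of convolutional weights": at each spatial location, the contribution to a (virtual) 1x1 conv weight gradient is the outer product of the activation vector and the back-propagated gradient vector, and the Frobenius norm of that outer product factorizes as the product of the two vector norms. The model (ResNet-50) and the hooked layer are illustrative assumptions, not details taken from the paper.

    import torch
    import torchvision.models as models

    model = models.resnet50(weights=None).eval()
    feats = {}

    def hook(module, inputs, output):
        output.retain_grad()   # keep the gradient on this intermediate activation
        feats["a"] = output

    # Assumed target: the output of the last residual stage (arbitrary choice).
    handle = model.layer4.register_forward_hook(hook)

    x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
    scores = model(x)
    scores[0, scores[0].argmax()].backward()  # backpropagate the top class score

    a = feats["a"].detach()   # activations, shape (B, C, H, W)
    g = feats["a"].grad       # gradient of the score w.r.t. those activations

    # ||g a^T||_F = ||a|| * ||g||, so the per-location saliency needs only
    # the two channel-wise norms, never the full outer-product tensor.
    saliency = a.norm(dim=1) * g.norm(dim=1)  # (B, H, W) importance map

    handle.remove()

Because the rank-one factorization is exact, the map costs only two channel-wise norms per location; upsampling the map to the input resolution for visualization would be the usual final step.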

Bibliographic Details
Main Authors: Rebuffi, S-A, Fong, R, Ji, X, Vedaldi, A
Format: Conference item
Language: English
Published: IEEE 2020
Institution: University of Oxford