Learn to pay attention
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.
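The mechanism the abstract describes — scoring each spatial location of an intermediate feature map, normalising the scores into convex weights, and classifying from the resulting weighted sum of feature vectors alone — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dot-product compatibility function, the array shapes, and the function name `attention_pool` are assumptions made for the example.

```python
import numpy as np

def attention_pool(feature_map, global_feat):
    """Score each spatial location of a CNN feature map against a global
    descriptor, softmax the scores into convex weights, and return the
    attention-weighted combination of the local feature vectors.

    feature_map: (C, H, W) intermediate activations
    global_feat: (C,) global image descriptor used for compatibility scoring
    Returns: (descriptor of shape (C,), attention map of shape (H, W))
    """
    C, H, W = feature_map.shape
    flat = feature_map.reshape(C, H * W)        # N = H*W local feature vectors
    scores = global_feat @ flat                 # dot-product compatibility, (N,)
    scores = scores - scores.max()              # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> convex weights
    descriptor = flat @ weights                 # convex combination of vectors
    return descriptor, weights.reshape(H, W)
```

Because the weights are non-negative and sum to one, the descriptor is a convex combination of the local feature vectors; feeding only this descriptor to the classifier is what pressures the scores to behave as attention values.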
Main Authors: Jetley, S; Lord, NA; Lee, N; Torr, PH
Format: Conference item
Published: International Conference on Learning Representations, 2018
Id: oxford-uuid:f11213ef-56d1-4706-a67b-d0c23f6002b3
Institution: University of Oxford