Where Should Saliency Models Look Next?

Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a fine-grained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.

Keywords: Saliency maps, Saliency estimation, Eye movements, Deep learning, Image understanding
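The abstract notes that some benchmark evaluation scores have begun to saturate. As a point of reference for how such scores are computed, below is a minimal sketch of one standard fixation-based metric, Normalized Scanpath Saliency (NSS); the function name, array conventions, and toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized Scanpath Saliency (illustrative sketch, not the paper's code).

    Z-scores the predicted saliency map, then averages the standardized
    values at human fixation locations. Chance level is ~0; higher is better.

    saliency_map: 2-D float array of model predictions.
    fixation_map: 2-D boolean array, True where a human fixation landed.
    """
    s = saliency_map.astype(np.float64)
    s = (s - s.mean()) / (s.std() + 1e-12)  # standardize; epsilon guards flat maps
    return float(s[fixation_map.astype(bool)].mean())

# Toy usage: a prediction that peaks exactly at the fixation scores well above 0.
pred = np.zeros((5, 5)); pred[2, 2] = 1.0
fix = np.zeros((5, 5), dtype=bool); fix[2, 2] = True
print(nss(pred, fix))  # large positive NSS
```

NSS is one of several metrics (alongside AUC variants, CC, KL, SIM) reported on the MIT300 and CAT2000 benchmarks; saturation of these aggregate scores is what motivates the finer-grained, region-level analysis the paper proposes.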

Bibliographic Details
Main Authors: Borji, Ali; Bylinskii, Zoya; Recasens Continente, Adria; Oliva, Aude; Torralba, Antonio; Durand, Frederic
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: en_US
Published: Springer 2018
Online Access: http://hdl.handle.net/1721.1/113344
ORCID: https://orcid.org/0000-0003-0941-9863
       https://orcid.org/0000-0003-4915-0256
       https://orcid.org/0000-0001-9919-069X
Citation: Bylinskii, Zoya, et al. “Where Should Saliency Models Look Next?” Computer Vision – ECCV 2016, edited by Bastian Leibe et al., vol. 9909, Springer International Publishing, 2016, pp. 809–24.
DOI: http://dx.doi.org/10.1007/978-3-319-46454-1_49
Conference: European Conference on Computer Vision – ECCV 2016
Date Issued: 2016-09
ISBN: 978-3-319-46453-4; 978-3-319-46454-1
ISSN: 0302-9743; 1611-3349
Funding: Natural Sciences and Engineering Research Council of Canada (Postgraduate Scholarships-Doctoral Fellowship); Fundación Obra Social de La Caixa (Fellowship); National Science Foundation (U.S.) (Grant 1524817); Toyota Motor Corporation (Grant)
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)