Scale and translation-invariance for novel objects in human vision
© 2020, The Author(s). Though the range of invariance in recognition of novel objects is a basic aspect of human vision, its characterization has remained surprisingly elusive. Here we report tolerance to scale and position changes in one-shot learning by measuring recognition accuracy of Korean letters presented in a flash to non-Korean subjects who had no previous experience with Korean letters. We found that humans have significant scale-invariance after only a single exposure to a novel object. The range of translation-invariance is limited, depending on the size and position of presented objects. To understand the underlying brain computation associated with the invariance properties, we compared experimental data with computational modeling results. Our results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance, by encoding different scale channels as well as eccentricity-dependent representations captured by neurons’ receptive field sizes and sampling density that change with eccentricity. Our psychophysical experiments and related simulations strongly suggest that the human visual system uses a computational strategy that differs in some key aspects from current deep learning architectures, being more data efficient and relying more critically on eye-movements.
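The abstract above attributes invariant recognition to multiple scale channels combined with eccentricity-dependent receptive fields. The following is a minimal, illustrative sketch of that idea, not the authors' model: NumPy only, Gaussian receptive fields sampled along one meridian, a linear growth of receptive-field size and spacing with eccentricity, and scale channels implemented by scaling the receptive-field size. All function names and parameter values are assumptions chosen for the example.

```python
# Illustrative sketch (assumptions, not the published model): Gaussian
# receptive fields whose size and spacing grow linearly with eccentricity,
# replicated across several scale channels.
import numpy as np

def eccentric_grid(radius=64.0, rf_slope=0.25, rf_min=1.0):
    """Sample positions along one meridian; spacing equals the local
    receptive-field size, so sampling density falls with eccentricity."""
    positions, sizes = [0.0], [rf_min]
    while positions[-1] < radius:
        size = rf_min + rf_slope * positions[-1]   # RF size grows with eccentricity
        positions.append(positions[-1] + size)     # spacing matches RF size
        sizes.append(size)
    return np.array(positions), np.array(sizes)

def gaussian_pool(image, cx, cy, sigma):
    """Response of a single Gaussian receptive field centered at (cx, cy)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rf = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return float((rf * image).sum() / rf.sum())

def multiscale_eccentric_features(image, scales=(0.5, 1.0, 2.0)):
    """One feature vector per scale channel; here a 'scale channel' simply
    rescales every receptive field by a common factor."""
    h, w = image.shape
    cy, cx0 = h / 2.0, w / 2.0
    positions, sizes = eccentric_grid(radius=w / 2.0)
    channels = []
    for s in scales:
        feats = [gaussian_pool(image, cx0 + p, cy, sz * s)
                 for p, sz in zip(positions, sizes)]
        channels.append(feats)
    return np.array(channels)                      # shape: (n_scales, n_positions)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    feats = multiscale_eccentric_features(img)
    scale_tolerant = feats.max(axis=0)             # pool over scale channels
    print(feats.shape, scale_tolerant.shape)
```

Pooling over the scale channels (the `max` above) gives features that change relatively little when the stimulus is rescaled, while the eccentricity-dependent grid makes tolerance to translation depend on where the stimulus falls relative to fixation, which is the qualitative behavior the abstract describes.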
Main Authors: | Han, Yena; Roig, Gemma; Geiger, Gad; Poggio, Tomaso |
---|---|
Other Authors: | Center for Brains, Minds, and Machines |
Format: | Article |
Language: | English |
Published: | Springer Science and Business Media LLC, 2021 |
Online Access: | https://hdl.handle.net/1721.1/136305 |
author | Han, Yena; Roig, Gemma; Geiger, Gad; Poggio, Tomaso |
author2 | Center for Brains, Minds, and Machines |
collection | MIT |
description | © 2020, The Author(s). Though the range of invariance in recognition of novel objects is a basic aspect of human vision, its characterization has remained surprisingly elusive. Here we report tolerance to scale and position changes in one-shot learning by measuring recognition accuracy of Korean letters presented in a flash to non-Korean subjects who had no previous experience with Korean letters. We found that humans have significant scale-invariance after only a single exposure to a novel object. The range of translation-invariance is limited, depending on the size and position of presented objects. To understand the underlying brain computation associated with the invariance properties, we compared experimental data with computational modeling results. Our results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance, by encoding different scale channels as well as eccentricity-dependent representations captured by neurons’ receptive field sizes and sampling density that change with eccentricity. Our psychophysical experiments and related simulations strongly suggest that the human visual system uses a computational strategy that differs in some key aspects from current deep learning architectures, being more data efficient and relying more critically on eye-movements. |
format | Article |
id | mit-1721.1/136305 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
publisher | Springer Science and Business Media LLC |
record_format | dspace |
spelling | mit-1721.1/136305; 2023-03-01T21:29:47Z; 2021-10-27T20:34:48Z; 2020; 2021-03-19T13:58:20Z; Article; http://purl.org/eprint/type/JournalArticle; https://hdl.handle.net/1721.1/136305; en; DOI: 10.1038/S41598-019-57261-6; Scientific Reports; Creative Commons Attribution 4.0 International license; https://creativecommons.org/licenses/by/4.0/; application/pdf; Springer Science and Business Media LLC |
title | Scale and translation-invariance for novel objects in human vision |
url | https://hdl.handle.net/1721.1/136305 |