Unsupervised learning of invariant representations

The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n → ∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n → 1), as humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a “good” representation for supervised learning, characterized by small sample complexity. We consider the case of visual object recognition, though the theory also applies to other domains such as speech. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translation, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and selective signature can be computed for each image or image patch: the invariance can be exact in the case of group transformations and approximate under non-group transformations. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such a signature. The theory offers novel unsupervised learning algorithms for “deep” architectures for image and speech recognition. We conjecture that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and selective for recognition; we also show how this representation may be continuously learned in an unsupervised way during development and visual experience.
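
The filtering-and-pooling module described in the abstract admits a compact illustration. Below is a minimal NumPy sketch (not the authors' code) for the group of 1-D circular translations: the "simple cell" stage computes the dot products of a signal with every shifted copy of a template, and the "complex cell" stage pools those responses into a histogram, which is invariant to any circular shift of the input. The function name `signature`, the FFT shortcut, and the histogram pooling are illustrative choices, not details taken from the paper.

```python
import numpy as np

def signature(x, templates, n_bins=16):
    """Shift-invariant signature of a 1-D signal (illustrative sketch).

    Simple-cell stage: dot products of x with every circular shift of
    each template (computed at once as a circular cross-correlation).
    Complex-cell stage: pool each template's responses into a histogram,
    which does not change when x is circularly shifted.
    """
    x = x / np.linalg.norm(x)
    parts = []
    for t in templates:
        t = t / np.linalg.norm(t)
        # responses[m] = <x, roll(t, m)> for every shift m in the group
        responses = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(t))).real
        # Pooling over the group orbit: a histogram is invariant to the
        # circular permutation that a shift of x induces on the responses.
        hist, _ = np.histogram(responses, bins=n_bins, range=(-1.0, 1.0))
        parts.append(hist / responses.size)
    return np.concatenate(parts)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
templates = rng.standard_normal((4, 64))

s1 = signature(x, templates)
s2 = signature(np.roll(x, 17), templates)  # group-transformed input
assert np.allclose(s1, s2)  # invariance is exact for group transformations
```

Shifting the input only permutes the simple-cell responses, so any permutation-invariant pooling (histogram, moments, max) leaves the signature unchanged, while using several templates provides the selectivity the abstract refers to.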

Bibliographic Details
Main Authors: Anselmi, Fabio; Leibo, Joel Z; Rosasco, Lorenzo; Mutch, James Vincent; Tacchetti, Andrea; Poggio, Tomaso A
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; McGovern Institute for Brain Research at MIT
Format: Article
Language: en_US
Published: Elsevier, 2018 (repository record); the article appeared in Theoretical Computer Science, vol. 633, June 2016, pp. 112–121
ISSN: 0304-3975
DOI: http://dx.doi.org/10.1016/j.tcs.2015.06.048
Funding: National Science Foundation (U.S.), Award CCF-1231216
Rights: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Online Access: http://hdl.handle.net/1721.1/116137
Author ORCID iDs:
https://orcid.org/0000-0002-0264-4761
https://orcid.org/0000-0002-3153-916X
https://orcid.org/0000-0001-6376-4786
https://orcid.org/0000-0001-6130-5631
https://orcid.org/0000-0001-9311-9171
https://orcid.org/0000-0002-3944-0455