Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning?

The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n → ∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n → 1), like humans seem able to do. We propose an approach...


Bibliographic Details
Other Authors: Anselmi, Fabio
Format: Working Paper
Language: en_US
Published: Center for Brains, Minds and Machines (CBMM), arXiv, 2014
Subjects: invariance; machine learning; complexity
Online Access: http://hdl.handle.net/1721.1/90566
author2 Anselmi, Fabio
author_facet Anselmi, Fabio
collection MIT
description The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n → ∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n → 1), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a "good" representation for supervised learning, characterized by small sample complexity (n). We consider the case of visual object recognition, though the theory applies to other domains. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translations, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and unique (discriminative) signature can be computed for each image patch, I, in terms of empirical distributions of the dot-products between I and a set of templates stored during unsupervised learning. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such estimates. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts. The theory extends existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition, and that this representation may be continuously learned in an unsupervised way during development and visual experience.
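The description sketches a concrete computation: pool the dot-products between a patch I and transformed versions of each stored template, and use the empirical distribution of those values as an invariant signature. Below is a minimal Python sketch of that idea for 1-D signals, assuming circular shifts as the transformation group and histogram pooling; the names invariant_signature, n_shifts, and n_bins are illustrative choices, not taken from the paper.

import numpy as np

def invariant_signature(patch, templates, n_shifts, n_bins=16):
    """Translation-invariant signature of a 1-D patch (illustrative sketch).

    For each stored template, take dot-products of the patch with every
    circular shift of the template (the template's orbit under translation),
    then pool those values into a normalized histogram. Pooling over the
    full group of shifts makes the histogram invariant to shifts of the patch.
    """
    sig = []
    for t in templates:
        dots = [patch @ np.roll(t, s) for s in range(n_shifts)]
        # With unit-norm patch and templates, dot-products lie in [-1, 1].
        hist, _ = np.histogram(dots, bins=n_bins, range=(-1.0, 1.0), density=True)
        sig.append(hist)
    return np.concatenate(sig)

# Toy check: shifting the patch only permutes the dot-products, so the
# pooled signature is (numerically) unchanged.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
patch = unit(rng.standard_normal(64))
templates = [unit(rng.standard_normal(64)) for _ in range(4)]
s1 = invariant_signature(patch, templates, n_shifts=64)
s2 = invariant_signature(np.roll(patch, 7), templates, n_shifts=64)
print(np.abs(s1 - s2).max())  # ~0

The histogram bins here play the role of the pooling nonlinearities; any other pooled statistic of the same dot-product distribution (e.g., its moments) would preserve the invariance, since shifting the patch merely permutes the set of dot-products.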
first_indexed 2024-09-23T08:20:53Z
format Working Paper
id mit-1721.1/90566
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T08:20:53Z
publishDate 2014
publisher Center for Brains, Minds and Machines (CBMM), arXiv
record_format dspace
spelling mit-1721.1/90566 2019-04-09T18:02:43Z Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning? Anselmi, Fabio Leibo, Joel Z. Rosasco, Lorenzo Mutch, Jim Tacchetti, Andrea Poggio, Tomaso invariance machine learning complexity The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n → ∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n → 1), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a "good" representation for supervised learning, characterized by small sample complexity (n). We consider the case of visual object recognition, though the theory applies to other domains. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translations, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and unique (discriminative) signature can be computed for each image patch, I, in terms of empirical distributions of the dot-products between I and a set of templates stored during unsupervised learning. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such estimates. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts. The theory extends existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition, and that this representation may be continuously learned in an unsupervised way during development and visual experience. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. 2014-10-06T22:02:10Z 2014-10-06T22:02:10Z 2014-03-12 Working Paper http://hdl.handle.net/1721.1/90566 arXiv:1311.4158v5 en_US CBMM Memo;0001 Attribution-NonCommercial 3.0 United States http://creativecommons.org/licenses/by-nc/3.0/us/ application/pdf Center for Brains, Minds and Machines (CBMM), arXiv
title Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning?
title_sort unsupervised learning of invariant representations with low sample complexity the magic of sensory cortex or a new framework for machine learning
topic invariance
machine learning
complexity
url http://hdl.handle.net/1721.1/90566