Generalization and Properties of the Neural Response

Hierarchical learning algorithms have enjoyed tremendous growth in recent years, with many new algorithms being proposed and applied to a wide range of applications. Despite the apparent success of hierarchical algorithms in practice, however, the theory of hierarchical architectures remains at an early stage. In this paper we study the theoretical properties of hierarchical algorithms from a mathematical perspective. Our work is based on the framework of hierarchical architectures introduced by Smale et al. in the paper "Mathematics of the Neural Response", Foundations of Computational Mathematics, 2010. We propose a generalized definition of the neural response and derived kernel that allows us to integrate several existing hierarchical algorithms into our framework. We then use this generalized definition to analyze the theoretical properties of hierarchical architectures. Our analysis focuses on three aspects of the hierarchy. First, we show that a wide class of architectures suffers from range compression: essentially, the derived kernel becomes increasingly saturated at each layer. Second, we show that the complexity of a linear architecture is constrained by the complexity of the first layer, and in some cases the architecture collapses into a single-layer linear computation. Finally, we characterize the discrimination and invariance properties of the derived kernel when the input data are one-dimensional strings. We believe these theoretical results provide a useful foundation for guiding future developments in the theory of hierarchical algorithms.
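For context, the framework referenced in the abstract (Smale et al., "Mathematics of the Neural Response", Foundations of Computational Mathematics, 2010) alternates a pooling step with a normalized kernel computation at each layer. The following is a minimal sketch of that recursion; the notation (template sets T_m, transformation sets H_m, normalized kernel \hat{K}_m) is paraphrased from the cited paper rather than taken from this record:

    N_m(f)(t) = \max_{h \in H_{m-1}} \hat{K}_{m-1}(f \circ h,\, t), \qquad t \in T_{m-1}   % neural response: pool over transformations
    K_m(f, g) = \langle N_m(f),\, N_m(g) \rangle                                           % derived kernel: inner product of responses
    \hat{K}_m(f, g) = K_m(f, g) \,/\, \sqrt{K_m(f, f)\, K_m(g, g)}                          % normalization applied before the next layer

Under the generalized definition the report proposes, the max may be replaced by other pooling operations; the range-compression result then concerns the normalized kernel \hat{K}_m, whose values concentrate toward the top of their range as m grows, which is the saturation effect the abstract describes.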

Bibliographic Details
Main Authors: Bouvrie, Jake; Poggio, Tomaso; Rosasco, Lorenzo; Smale, Steve; Wibisono, Andre
Other Authors: Tomaso Poggio; Center for Biological and Computational Learning (CBCL)
Institution: Massachusetts Institute of Technology
Published: 2010-11-19
Report Numbers: MIT-CSAIL-TR-2010-051; CBCL-292
Subjects: hierarchical learning; kernel methods; learning theory
Physical Description: 59 p.; application/pdf
License: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (http://creativecommons.org/licenses/by-nc-nd/3.0/)
Online Access: http://hdl.handle.net/1721.1/60024