Optimal Rates for Regularization Operators in Learning Theory
We develop some new error bounds for learning algorithms induced by regularization methods in the regression setting. The "hardness" of the problem is characterized in terms of the parameters r and s, the first related to the "complexity" of the target function, the second connected to the effective dimension of the marginal probability measure over the input space. We show, extending previous results, that by a suitable choice of the regularization parameter as a function of the number of available examples, it is possible to attain the optimal minimax rates of convergence for the expected squared loss of the estimators, over the family of priors fulfilling the constraint r + s > 1/2. The setting considers both labelled and unlabelled examples, the latter being crucial for the optimality results on the priors in the range r < 1/2.
Main Author: | Caponnetto, Andrea |
---|---|
Other Authors: | Tomaso Poggio |
Language: | en_US |
Published: | 2006 |
Subjects: | optimal rates, regularized least-squares algorithm, regularization methods, adaptation |
Online Access: | http://hdl.handle.net/1721.1/34216 |
_version_ | 1811087309741752320 |
---|---|
author | Caponnetto, Andrea |
author2 | Tomaso Poggio |
author_facet | Tomaso Poggio Caponnetto, Andrea |
author_sort | Caponnetto, Andrea |
collection | MIT |
description | We develop some new error bounds for learning algorithms induced by regularization methods in the regression setting. The "hardness" of the problem is characterized in terms of the parameters r and s, the first related to the "complexity" of the target function, the second connected to the effective dimension of the marginal probability measure over the input space. We show, extending previous results, that by a suitable choice of the regularization parameter as a function of the number of available examples, it is possible to attain the optimal minimax rates of convergence for the expected squared loss of the estimators, over the family of priors fulfilling the constraint r + s > 1/2. The setting considers both labelled and unlabelled examples, the latter being crucial for the optimality results on the priors in the range r < 1/2. |
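The abstract concerns the regularized least-squares algorithm, where the regularization parameter is chosen as a decreasing function of the sample size n. As a minimal illustrative sketch (not the paper's construction: the kernel, the data, and the polynomial schedule for lambda are assumptions here; the optimal exponent in the paper depends on the parameters r and s), kernel regularized least squares looks like:

```python
import numpy as np

def regularized_least_squares(K, y, lam):
    """Solve the kernel RLS system (K + n*lam*I) c = y for the coefficients c."""
    n = len(y)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def gaussian_kernel(A, B, sigma=0.5):
    # Squared Euclidean distances between all pairs of rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy 1-D regression problem (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

K = gaussian_kernel(X, X)
# Illustrative schedule: lambda shrinks polynomially with n; the exponent
# achieving the minimax rate in the paper is a function of r and s.
lam = n ** (-0.5)
c = regularized_least_squares(K, y, lam)
y_hat = K @ c
mse = np.mean((y_hat - y) ** 2)
```

The estimator is the kernel expansion with coefficients c; tuning the exponent in the lambda schedule is exactly the adaptation problem the report's rates address.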
first_indexed | 2024-09-23T13:44:02Z |
id | mit-1721.1/34216 |
institution | Massachusetts Institute of Technology |
language | en_US |
last_indexed | 2024-09-23T13:44:02Z |
publishDate | 2006 |
record_format | dspace |
spelling | mit-1721.1/34216 2019-04-12T08:37:58Z Optimal Rates for Regularization Operators in Learning Theory Caponnetto, Andrea Tomaso Poggio Center for Biological and Computational Learning (CBCL) optimal rates, regularized least-squares algorithm, regularization methods, adaptation We develop some new error bounds for learning algorithms induced by regularization methods in the regression setting. The "hardness" of the problem is characterized in terms of the parameters r and s, the first related to the "complexity" of the target function, the second connected to the effective dimension of the marginal probability measure over the input space. We show, extending previous results, that by a suitable choice of the regularization parameter as a function of the number of available examples, it is possible to attain the optimal minimax rates of convergence for the expected squared loss of the estimators, over the family of priors fulfilling the constraint r + s > 1/2. The setting considers both labelled and unlabelled examples, the latter being crucial for the optimality results on the priors in the range r < 1/2. 2006-09-29T18:36:42Z 2006-09-29T18:36:42Z 2006-09-10 MIT-CSAIL-TR-2006-062 CBCL-264 http://hdl.handle.net/1721.1/34216 en_US Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory 16 p. 776374 bytes 738421 bytes application/postscript application/pdf application/postscript application/pdf |
spellingShingle | optimal rates, regularized least-squares algorithm, regularization methods, adaptation Caponnetto, Andrea Optimal Rates for Regularization Operators in Learning Theory |
title | Optimal Rates for Regularization Operators in Learning Theory |
title_full | Optimal Rates for Regularization Operators in Learning Theory |
title_fullStr | Optimal Rates for Regularization Operators in Learning Theory |
title_full_unstemmed | Optimal Rates for Regularization Operators in Learning Theory |
title_short | Optimal Rates for Regularization Operators in Learning Theory |
title_sort | optimal rates for regularization operators in learning theory |
topic | optimal rates, regularized least-squares algorithm, regularization methods, adaptation |
url | http://hdl.handle.net/1721.1/34216 |
work_keys_str_mv | AT caponnettoandrea optimalratesforregularizationoperatorsinlearningtheory |