Some Properties of Empirical Risk Minimization over Donsker Classes

We study properties of algorithms which minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers converges to zero in probability. Therefore, as the number of samples grows, it becomes unlikely that adding a point (or a number of points) to the training set will result in a large jump (in L2 distance) to a new hypothesis. We also show that under some conditions the expected errors of the almost-minimizers become close at a rate faster than n^{-1/2}.
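The abstract's main claim — that the L2-diameter of the set of empirical almost-minimizers shrinks as the sample size grows — can be illustrated with a toy simulation. This sketch is not from the paper: the class of threshold functions, the 10% label noise, and the tolerance eps_n = n^{-1/2} are illustrative assumptions chosen so that the L2 distance between hypotheses has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
# Finite class of step functions f_t(x) = 1[x >= t]; a simple Donsker class.
thresholds = np.linspace(0.0, 1.0, 201)

def l2_diameter_of_almost_minimizers(n, eps):
    """Draw n samples, find all thresholds whose empirical risk is within
    eps of the minimum, and return the L2-diameter of that set."""
    x = rng.uniform(0.0, 1.0, n)
    y = (x >= 0.5).astype(float)
    flip = rng.uniform(0.0, 1.0, n) < 0.1   # 10% label noise (assumption)
    y[flip] = 1.0 - y[flip]
    # Empirical squared-error risk for every threshold at once.
    preds = (x[None, :] >= thresholds[:, None]).astype(float)
    risks = ((preds - y[None, :]) ** 2).mean(axis=1)
    almost = thresholds[risks <= risks.min() + eps]
    # For x ~ Uniform[0,1]: ||f_s - f_t||_{L2} = sqrt(|s - t|).
    return np.sqrt(almost.max() - almost.min())

for n in [100, 1000, 10000]:
    d = l2_diameter_of_almost_minimizers(n, eps=1.0 / np.sqrt(n))
    print(f"n={n:6d}  L2-diameter of almost-minimizers ~ {d:.3f}")
```

With the tolerance shrinking like n^{-1/2}, the set of almost-minimizers concentrates around the Bayes-optimal threshold 0.5, so the printed diameter decreases with n — the stability phenomenon the abstract describes.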


Bibliographic Details
Main Authors: Caponnetto, Andrea; Rakhlin, Alexander
Language: English
Published: 2005
Subjects: AI; empirical risk minimization; stability; empirical processes
Online Access: http://hdl.handle.net/1721.1/30545
Report Numbers: MIT-CSAIL-TR-2005-033; AIM-2005-018; CBCL-250
Date Issued: 2005-05-17
Publisher: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
Extent: 9 p.
Formats: application/pdf; application/postscript