Iterative regularization for learning with convex loss functions

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved by (early) stopping an empirical iteration.

Full description

Bibliographic Details
Main Authors: Lin, Junhong, Zhou, Ding-Xuan, Rosasco, Lorenzo
Other Authors: McGovern Institute for Brain Research at MIT
Format: Article
Published: JMLR, Inc. 2018
Online Access: http://hdl.handle.net/1721.1/116303
https://orcid.org/0000-0001-6376-4786
Description: We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove consistency and finite sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and gives insights on the interplay between statistics and optimization in machine learning.
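The idea the abstract describes — running a plain subgradient iteration on the empirical risk and using the stopping time itself as the regularization parameter — can be illustrated with a minimal sketch. This is not the paper's exact algorithm or step-size schedule; the Gaussian kernel, the hinge loss, the constant step size, and the hold-out stopping rule below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix K[i, j] = exp(-||A_i - B_j||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def subgradient_iterates(K, y, eta=1.0, T=200):
    """Kernel subgradient method for the (convex, nonsmooth) hinge loss.

    Returns the whole path of coefficient vectors alpha_t, so that
    'early stopping' amounts to picking an index t along the path.
    """
    n = len(y)
    alpha = np.zeros(n)          # f_t = sum_i alpha_i K(x_i, .)
    iterates = [alpha.copy()]
    for _ in range(T):
        margins = y * (K @ alpha)
        # A subgradient of max(0, 1 - y f(x)) w.r.t. f(x): -y where margin < 1
        g = np.where(margins < 1, -y, 0.0)
        alpha = alpha - (eta / n) * g
        iterates.append(alpha.copy())
    return iterates

# Toy binary classification data (synthetic, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=80))
Xtr, ytr, Xval, yval = X[:50], y[:50], X[50:], y[50:]

Ktr = gaussian_kernel(Xtr, Xtr)
Kval = gaussian_kernel(Xval, Xtr)
iters = subgradient_iterates(Ktr, ytr)

# Early stopping: the iteration index minimizing hold-out error plays
# the role of the regularization parameter.
val_err = [np.mean(np.sign(Kval @ a) != yval) for a in iters]
t_star = int(np.argmin(val_err))
```

No explicit penalty or constraint appears anywhere: the number of iterations `t_star` is the only knob, which is the point of the abstract.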
Institution: Massachusetts Institute of Technology
Funding: Italian Ministry of Education, Universities and Research (RBFR12M3AC); National Science Foundation (U.S.) (McGovern Institute for Brain Research at MIT, Center for Brains, Minds, and Machines, STC Award CCF-1231216); Research Grants Council (Hong Kong, China) (Project CityU 104012); National Natural Science Foundation (China) (Grant 11461161006)
Date issued: 2016-05
ISSN: 1532-4435, 1533-7928
Citation: Lin, Junhong, Lorenzo Rosasco, and Ding-Xuan Zhou. "Iterative Regularization for Learning with Convex Loss Functions." Journal of Machine Learning Research 17, 2016, pp. 1-38. © 2016 Junhong Lin, Lorenzo Rosasco and Ding-Xuan Zhou
Full text: http://www.jmlr.org/papers/volume17/15-115/15-115.pdf
Journal: Journal of Machine Learning Research (JMLR, Inc.)
Format: application/pdf
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.