Iterative regularization for learning with convex loss functions
We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved...
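The abstract describes regularization obtained purely by (early) stopping an empirical subgradient iteration, with no penalty term or norm constraint. As a rough illustration only, and not the estimator or stopping rule analyzed in the paper, the sketch below runs subgradient descent on the average hinge loss of a linear model and picks the stopping iterate with a held-out validation set; the linear model, the hinge loss, and the validation-based stopping rule are all illustrative assumptions.

```python
import numpy as np

def subgradient_early_stopping(X, y, X_val, y_val, step=0.01, max_iter=500):
    """Subgradient descent on the empirical hinge loss, regularized only
    by early stopping (no penalty, no constraint).

    Illustrative sketch: linear model w, labels y in {-1, +1}, stopping
    iteration chosen on a validation set (an assumed, simple rule)."""
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_err = w.copy(), np.inf
    for t in range(1, max_iter + 1):
        # Subgradient of the average hinge loss max(0, 1 - y <w, x>):
        # -y * x on examples with margin < 1, zero elsewhere.
        margins = y * (X @ w)
        active = margins < 1
        g = -(y[active, None] * X[active]).sum(axis=0) / n
        w -= step * g
        # Early stopping: keep the iterate with the best validation error.
        val_err = np.mean(np.sign(X_val @ w) != y_val)
        if val_err < best_err:
            best_err, best_w = val_err, w.copy()
    return best_w
```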
Main Authors: Lin, Junhong; Zhou, Ding-Xuan; Rosasco, Lorenzo
Other Authors: McGovern Institute for Brain Research at MIT
Format: Article
Published: JMLR, Inc., 2018
Online Access: http://hdl.handle.net/1721.1/116303 ; https://orcid.org/0000-0001-6376-4786
Similar Items
- Iterative Regularization via Dual Diagonal Descent
  by: Garrigos, Guillaume, et al.
  Published: (2018)
- Iterative Projection Methods for Structured Sparsity Regularization
  by: Rosasco, Lorenzo, et al.
  Published: (2009)
- Modified Fejér sequences and applications
  by: Lin, Junhong, et al.
  Published: (2018)
- Convex learning of multiple tasks and their structure
  by: Ciliberto, Carlo, et al.
  Published: (2017)
- Elastic-Net Regularization in Learning Theory
  by: De Mol, Christine, et al.
  Published: (2008)