Iterative regularization for learning with convex loss functions
We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved...
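The abstract describes the method only at a high level. The snippet below is a minimal, hypothetical sketch of the general idea it names, subgradient descent on an empirical convex loss with generalization obtained by early stopping, and is not the paper's exact algorithm. The hinge loss, synthetic data, 1/sqrt(t) step size, and validation-based stopping rule are all illustrative assumptions.

```python
# Sketch: iterative regularization by early-stopped subgradient descent.
# All data, loss, and step-size choices below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustrative assumption).
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.5 * rng.normal(size=n))

# Hold out part of the sample to select the stopping time.
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

def hinge_loss(w, X, y):
    """Empirical hinge loss at w."""
    return np.maximum(0.0, 1.0 - y * (X @ w)).mean()

def hinge_subgradient(w, X, y):
    """A subgradient of the empirical hinge loss at w."""
    active = (y * (X @ w)) < 1                # points violating the margin
    return -(active[:, None] * (y[:, None] * X)).mean(axis=0)

w = np.zeros(d)
best_w, best_val, T = w.copy(), np.inf, 500
for t in range(1, T + 1):
    step = 1.0 / np.sqrt(t)                   # decaying step size (assumption)
    w = w - step * hinge_subgradient(w, X_tr, y_tr)
    val = hinge_loss(w, X_val, y_val)
    if val < best_val:                        # early stopping: keep the iterate
        best_val, best_w = val, w.copy()      # with the best validation loss

print("validation loss at selected stopping time:", best_val)
```

No penalty term or norm constraint appears anywhere above; the number of subgradient iterations retained plays the role of the regularization parameter, which is the point the abstract makes.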
| Main authors: | Lin, Junhong; Zhou, Ding-Xuan; Rosasco, Lorenzo |
|---|---|
| Other authors: | McGovern Institute for Brain Research at MIT |
| Format: | Article |
| Published: | JMLR, Inc., 2018 |
| Online access: | http://hdl.handle.net/1721.1/116303 https://orcid.org/0000-0001-6376-4786 |
Similar Items

- Iterative Regularization via Dual Diagonal Descent
  Author: Garrigos, Guillaume, et al.
  Published: (2018)
- Iterative Projection Methods for Structured Sparsity Regularization
  Author: Rosasco, Lorenzo, et al.
  Published: (2009)
- Modified Fejér sequences and applications
  Author: Lin, Junhong, et al.
  Published: (2018)
- Regularized Jacobi iteration for decentralized convex quadratic optimization with separable constraints
  Author: Deori, L., et al.
  Published: (2018)
- Convex learning of multiple tasks and their structure
  Author: Ciliberto, Carlo, et al.
  Published: (2017)