A unified framework for consistency of regularized loss minimizers

We characterize a family of regularized loss minimization problems that satisfy three properties: scaled uniform convergence, super-norm regularization, and norm-loss monotonicity. We show several theoretical guarantees within this framework, including loss consistency, norm consistency, sparsistency (i.e. support recovery) as well as sign consistency. A number of regularization problems can be shown to fall within our framework and we provide several examples. Our results can be seen as a concise summary of existing guarantees, but we also extend them to new settings. Our formulation enables us to assume very little about the hypothesis class, data distribution, the loss, or the regularization. In particular, many of our results do not require a bounded hypothesis class or identically distributed samples. Similarly, we do not assume boundedness, convexity, or smoothness of the loss or the regularizer. We only assume approximate optimality of the empirical minimizer. In terms of recovery, in contrast to existing results, our sparsistency and sign consistency results do not require knowledge of the sub-differential of the objective function.
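
The paper proves consistency guarantees for a family of problems rather than proposing an algorithm. Purely as an illustration, the sketch below instantiates one classical member of that family, the lasso (squared loss with l1 regularization), computes an approximate empirical minimizer by proximal gradient descent (ISTA), and then checks sparsistency (support recovery) and sign consistency against a synthetic sparse ground truth. The problem sizes, regularization level lam, iteration count, and support threshold are all assumptions made for the demo, not values taken from the paper.

    # Illustrative sketch only: the lasso as one instance of a regularized
    # loss minimization problem. Not the paper's method; all constants here
    # (n, p, k, lam, iterations, threshold) are hypothetical demo choices.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, k = 200, 50, 5                       # samples, dimension, true sparsity

    theta_true = np.zeros(p)
    theta_true[:k] = rng.choice([-1.0, 1.0], size=k)   # sparse signed ground truth
    X = rng.standard_normal((n, p))
    y = X @ theta_true + 0.1 * rng.standard_normal(n)  # noisy linear observations

    lam = 0.1                                  # regularization level (assumed)

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    # ISTA: approximately minimize (1/2n)||y - X theta||^2 + lam * ||theta||_1.
    # Step size 1/L, where L = ||X||_2^2 / n bounds the gradient's Lipschitz constant.
    step = n / np.linalg.norm(X, 2) ** 2
    theta = np.zeros(p)
    for _ in range(2000):
        grad = X.T @ (X @ theta - y) / n
        theta = soft_threshold(theta - step * grad, step * lam)

    # Sparsistency: does the estimated support match the true support?
    support_hat = np.flatnonzero(np.abs(theta) > 1e-6)
    support_true = np.flatnonzero(theta_true)
    print("sparsistency (support recovery):", np.array_equal(support_hat, support_true))
    # Sign consistency: do the nonzero coordinates also carry the right signs?
    print("sign consistency:", np.array_equal(np.sign(theta), np.sign(theta_true)))

With these settings the support and signs are typically recovered; note that the guarantee in the paper only assumes approximate optimality of the empirical minimizer, which is exactly what a fixed number of ISTA iterations delivers.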

Bibliographic Details
Main Authors: Honorio, Jean; Jaakkola, Tommi S.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article (Conference Paper)
Language: English
Published: Association for Computing Machinery (ACM), 2015
ISSN: 1938-7228
Citation: Honorio, Jean, and Tommi Jaakkola. "A unified framework for consistency of regularized loss minimizers." Journal of Machine Learning Research: Workshop and Conference Proceedings, Proceedings of the 31st International Conference on Machine Learning, Volume 32 (2014), 136-144.
Online Access: http://hdl.handle.net/1721.1/100447
Proceedings: http://jmlr.org/proceedings/papers/v32/
ORCID: https://orcid.org/0000-0003-0238-6384, https://orcid.org/0000-0002-2199-0379