A new perspective on boosting in linear regression via subgradient optimization and relatives

We analyze boosting algorithms [Ann. Statist. 29 (2001) 1189–1232; Ann. Statist. 28 (2000) 337–407; Ann. Statist. 32 (2004) 407–499] in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression,...


Bibliographic Details
Main Authors: Freund, Robert M.; Grigas, Paul; Mazumder, Rahul
Other Authors: Massachusetts Institute of Technology. Operations Research Center; Sloan School of Management
Format: Article
Published: Institute of Mathematical Statistics 2018
Online Access: http://hdl.handle.net/1721.1/115300
https://orcid.org/0000-0002-1733-5363
https://orcid.org/0000-0002-5617-1058
Description: We analyze boosting algorithms [Ann. Statist. 29 (2001) 1189–1232; Ann. Statist. 28 (2000) 337–407; Ann. Statist. 32 (2004) 407–499] in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FS_ε) and least squares boosting [LS-BOOST(ε)], can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a minor modification of FS_ε that yields an algorithm for the LASSO, and that may be easily extended to an algorithm that computes the LASSO path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the LASSO may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-BOOST(ε) and FS_ε) by using techniques of first-order methods in convex optimization. Our computational guarantees inform us about the statistical properties of boosting algorithms. In particular, they provide, for the first time, a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
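
For intuition, the following is a minimal NumPy sketch (not the authors' implementation) of the two classic schemes the description refers to, LS-BOOST(ε) and the incremental forward stagewise algorithm FS_ε. It assumes y is centered and the columns of X are centered and scaled to unit norm; the function names, step sizes, and iteration counts are illustrative only.

```python
import numpy as np

def ls_boost(X, y, eps=0.1, n_iter=500):
    """LS-BOOST(eps) sketch: at each step, fit the current residual with the single
    best column by univariate least squares, but take only an eps-fraction of that fit."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                       # current residual
    for _ in range(n_iter):
        u = X.T @ r                    # univariate LS coefficients (unit-norm columns)
        j = int(np.argmax(np.abs(u)))  # column giving the largest decrease in squared error
        beta[j] += eps * u[j]          # shrunken coefficient update
        r -= eps * u[j] * X[:, j]      # matching residual update
    return beta

def forward_stagewise(X, y, eps=0.01, n_iter=5000):
    """FS_eps sketch: move the coefficient of the feature most correlated with the
    residual by a fixed step of size eps in the direction of that correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()
    for _ in range(n_iter):
        c = X.T @ r                    # correlations between features and residual
        j = int(np.argmax(np.abs(c)))  # the maximum absolute correlation tracked by the loss
        step = eps * np.sign(c[j])
        beta[j] += step
        r -= step * X[:, j]
    return beta

if __name__ == "__main__":
    # Small synthetic example with standardized columns and a centered response.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    X -= X.mean(axis=0)
    X /= np.linalg.norm(X, axis=0)
    y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.5 * rng.standard_normal(100)
    y -= y.mean()
    print(ls_boost(X, y)[:4])
    print(forward_stagewise(X, y)[:4])
```

In the paper's interpretation, the FS_ε update above is a subgradient step for minimizing the maximum absolute correlation between the features and the residuals (the quantity computed as max_j |c_j| in the loop), which is what allows standard first-order convergence analysis to be applied to these boosting schemes.
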
Citation: Freund, Robert M., Paul Grigas, and Rahul Mazumder. "A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives." The Annals of Statistics 45, 6 (December 2017): 2328–2364. © 2017 Institute of Mathematical Statistics
ISSN: 0090-5364
DOI: http://dx.doi.org/10.1214/16-AOS1505
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)