Notes on Regularized Least Squares

This is a collection of information about regularized least squares (RLS). The facts here are not “new results”, but we have not seen them usefully collected together before. A key goal of this work is to demonstrate that with RLS, we get certain things “for free”: if we can solve a single supervised RLS problem, we can search for a good regularization parameter lambda at essentially no additional cost.

The discussion in this paper applies to “dense” regularized least squares, where we work with matrix factorizations of the data or kernel matrix. It is also possible to work with iterative methods such as conjugate gradient, and this is frequently the method of choice for large data sets in high dimensions with very few nonzero dimensions per point, such as text classification tasks. The results discussed here do not apply to iterative methods, which have different design tradeoffs.

We present the results in greater detail than strictly necessary, erring on the side of showing our work. We hope that this will be useful to people trying to learn more about linear algebra manipulations in the machine learning context.

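The “for free” claim rests on reusing a single matrix factorization across many values of lambda. The sketch below is a hedged illustration of that idea, not code from the notes: it assumes a kernel RLS formulation of the form (K + n·lambda·I)c = y (the notes' exact scaling convention may differ), and the helper name rls_path and the toy data are purely illustrative. After one O(n^3) eigendecomposition of K, the coefficients for each additional lambda cost only O(n^2).

    # Hedged sketch (Python/NumPy): kernel RLS solved for a whole grid of
    # lambda values after a single eigendecomposition of the kernel matrix.
    # Assumed convention: (K + n*lambda*I) c = y.
    import numpy as np

    def rls_path(K, y, lambdas):
        """One O(n^3) eigendecomposition, then O(n^2) work per lambda."""
        n = K.shape[0]
        evals, Q = np.linalg.eigh(K)      # K = Q diag(evals) Q^T, done once
        Qty = Q.T @ y
        return {lam: Q @ (Qty / (evals + n * lam)) for lam in lambdas}

    # Toy usage with a linear kernel on random data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200)
    coeffs = rls_path(X @ X.T, y, lambdas=np.logspace(-6, 2, 25))

Regularization only shifts the eigenvalues in this form, which is why sweeping lambda is essentially free once the decomposition is in hand.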

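The abstract contrasts this dense, factorization-based setting with iterative solvers, which it identifies as the method of choice for large, sparse, high-dimensional data such as text classification. As another hedged sketch, not taken from the notes, the snippet below solves the linear RLS system (X^T X + n·lambda·I)w = X^T y with conjugate gradient, using a matrix-free operator so that X^T X is never formed; the helper name rls_cg and the lambda scaling are assumptions made for illustration.

    # Hedged sketch (Python/SciPy): linear RLS on a sparse design matrix,
    # solved iteratively with conjugate gradient and a matrix-free operator.
    # Assumed convention: (X^T X + n*lambda*I) w = X^T y.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg

    def rls_cg(X, y, lam):
        n, d = X.shape
        def matvec(w):                     # apply (X^T X + n*lam*I) to w
            return X.T @ (X @ w) + n * lam * w
        A = LinearOperator((d, d), matvec=matvec, dtype=float)
        w, info = cg(A, X.T @ y)           # iterative, matrix-free solve
        return w

    # Toy usage on a random sparse matrix (illustrative only).
    X = sp.random(1000, 5000, density=0.001, format="csr", random_state=0)
    y = np.random.default_rng(0).standard_normal(1000)
    w = rls_cg(X, y, lam=0.1)

Unlike the factorization above, this route requires a fresh solve for each lambda, which is part of the different design tradeoffs the abstract mentions.
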
Bibliographic Details
Main Authors: Rifkin, Ryan M.; Lippert, Ross A.
Other Authors: Poggio, Tomaso
Published: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory, 2007-05-01
Report Numbers: MIT-CSAIL-TR-2007-025; CBCL-268
Research Group: Center for Biological and Computational Learning (CBCL)
Subjects: machine learning, linear algebra
Extent: 8 p.
Format: application/pdf; application/postscript
Online Access: http://hdl.handle.net/1721.1/37318