Scaling law for recovering the sparsest element in a subspace

We address the problem of recovering a sparse n-vector within a given subspace. This problem is a subtask of some approaches to dictionary learning and sparse principal component analysis. Hence, if we can prove scaling laws for recovery of sparse vectors, it will be easier to derive and prove recovery results in these applications. In this paper, we present a scaling law for recovering the sparse vector from a subspace that is spanned by the sparse vector and k random vectors. We prove that the sparse vector will be the output of one of n linear programs with high probability if its support size s satisfies s ≲ n/(√k log n). The scaling law still holds when the desired vector is approximately sparse. To get a single estimate for the sparse vector from the n linear programs, we must select which output is the sparsest. This selection process can be based on any proxy for sparsity, and the specific proxy has the potential to improve or worsen the scaling law. If sparsity is interpreted in an ℓ1/ℓ∞ sense, then the scaling law cannot be better than s ≲ n/√k. Computer simulations show that selecting the sparsest output in the ℓ1/ℓ2 or thresholded-ℓ0 senses can lead to a larger parameter range for successful recovery than that given by the ℓ1/ℓ∞ sense.
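The abstract describes the procedure only at a high level: solve one linear program per coordinate, each pinning a different entry of the candidate vector to 1, then keep the candidate that a chosen sparsity proxy ranks sparsest. The sketch below illustrates that pipeline under stated assumptions; it is not the authors' code. The function name recover_sparsest, the use of scipy.optimize.linprog, and the 1e-3 cutoff in the thresholded-ℓ0 proxy are choices made here for concreteness.

```python
import numpy as np
from scipy.optimize import linprog

def recover_sparsest(B, proxy="l1/l2"):
    """Given a basis B (n x (k+1)) of the subspace, solve the n LPs
    min ||B c||_1  s.t.  (B c)_i = 1,  i = 1..n, and return the
    candidate that the chosen sparsity proxy ranks sparsest."""
    n, d = B.shape
    # LP variables z = [c; t]: coefficients c and slacks t with
    # -t <= B c <= t, so minimizing sum(t) minimizes ||B c||_1.
    cost = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[B, -np.eye(n)], [-B, -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * d + [(0, None)] * n  # c free, t >= 0
    candidates = []
    for i in range(n):
        # Equality constraint (B c)_i = 1 pins the i-th entry.
        A_eq = np.concatenate([B[i], np.zeros(n)])[None, :]
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=[1.0], bounds=bounds)
        if res.success:
            candidates.append(B @ res.x[:d])

    def sparsity(y):
        if proxy == "l1/linf":
            return np.abs(y).sum() / np.abs(y).max()
        if proxy == "l1/l2":
            return np.abs(y).sum() / np.linalg.norm(y)
        # "thresholded-l0": entries above a small fraction of the peak;
        # the 1e-3 cutoff is an arbitrary choice for this sketch.
        return np.count_nonzero(np.abs(y) > 1e-3 * np.abs(y).max())

    return min(candidates, key=sparsity)

# Toy instance: subspace spanned by an s-sparse vector and k Gaussians.
rng = np.random.default_rng(0)
n, k, s = 100, 5, 8
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
B = np.column_stack([x, rng.standard_normal((n, k))])
estimate = recover_sparsest(B)
print("correlation with planted vector:",
      abs(x @ estimate) / (np.linalg.norm(x) * np.linalg.norm(estimate)))
```

Each LP normalizes a different coordinate to 1, so whenever the pinned coordinate lies in the support of the planted vector the program can return a rescaled copy of it; the proxy then singles that copy out among the n candidates.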

Bibliographic Details
Main Authors: Demanet, Laurent; Hand, Paul
Other Authors: Massachusetts Institute of Technology. Department of Mathematics
Format: Article
Published: Oxford University Press (OUP), 2018
Citation: Demanet, L., and P. Hand. "Scaling Law for Recovering the Sparsest Element in a Subspace." Information and Inference 3, 4 (July 2014): 295–309. © 2014 The Authors
ISSN: 2049-8764, 2049-8772
DOI: http://dx.doi.org/10.1093/IMAIAI/IAU007
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Online Access: http://hdl.handle.net/1721.1/115483
ORCID: https://orcid.org/0000-0001-7052-5097