Revisiting compressed sensing: exploiting the efficiency of simplex and sparsification methods

Bibliographic Details
Main Authors: Vanderbei, Robert; Lin, Kevin; Liu, Han; Wang, Lie
Other Authors: Massachusetts Institute of Technology. Department of Mathematics
Format: Article
Language: English
Published: Springer Berlin Heidelberg 2017
Online Access: http://hdl.handle.net/1721.1/107484
https://orcid.org/0000-0003-3582-8898
Description
Summary: We propose two approaches to solve large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second approach reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing. For the first approach, if the true signal is very sparse and we initialize our solution to be the zero vector, then a customized parametric simplex method usually takes a small number of iterations to converge. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach can be used when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to obtain a perfect recovery is more restrictive than the corresponding condition if using the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from sparse problem representation, such as interior point methods (IPMs), are expected to have computational advantages for the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as ℓ₁_ℓₛ and Mirror Prox regardless of the sparsity level or problem size.
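
Both approaches ultimately solve the basis-pursuit linear program min ||x||_1 subject to Ax = b. A minimal sketch of that formulation follows, assuming the standard split x = x+ - x- with x+, x- >= 0; the dimensions and the use of SciPy's generic HiGHS solver are illustrative stand-ins, not a reproduction of the paper's customized parametric simplex implementation.

```python
# Basis pursuit as a standard-form LP via the split x = x+ - x-.
#   min 1'(x+ + x-)  s.t.  A(x+ - x-) = b,  x+, x- >= 0
# Dimensions and solver choice are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                      # measurements, signal length, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

c = np.ones(2 * n)                        # objective value equals ||x||_1
A_eq = np.hstack([A, -A])                 # encodes A(x+ - x-) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))  # near zero if recovery succeeds
```

In the paper's first approach, this same LP is instead attacked with a parametric simplex method initialized at the zero vector, which is why very sparse signals are recovered after only a small number of pivots.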
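The sparsity gain behind KCS can be sketched as follows. When the sensing matrix is kron(C, B), the vectorized system vec(Y) = kron(C, B) vec(X) has a completely dense constraint matrix, but introducing an auxiliary variable U = X C^T splits it into vec(U) = kron(C, I) vec(X) and kron(I, B) vec(U) = vec(Y), both sparse Kronecker-with-identity blocks. The sketch below shows one such splitting under assumed dimensions; it is not necessarily the paper's exact LP, and "highs-ipm" merely stands in for an interior point method.

```python
# One way to express Kronecker compressed sensing as a sparse LP:
# Y = B X C^T is split via the auxiliary U = X C^T into two blocks.
# All dimensions are illustrative; exact recovery depends on the
# sparsity level and the number of measurements.
import numpy as np
import scipy.sparse as sp
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m1, n1, m2, n2 = 8, 15, 8, 15
B = rng.standard_normal((m1, n1))
C = rng.standard_normal((m2, n2))

X = np.zeros((n1, n2))                    # sparse ground-truth signal
X.flat[rng.choice(n1 * n2, 4, replace=False)] = rng.standard_normal(4)
Y = B @ X @ C.T

nX, nU = n1 * n2, n1 * m2
K1 = sp.kron(sp.csr_matrix(C), sp.identity(n1), format="csr")  # vec(U) = K1 vec(X)
K2 = sp.kron(sp.identity(m2), sp.csr_matrix(B), format="csr")  # K2 vec(U) = vec(Y)

# Variables: (vec(X)+, vec(X)-, vec(U)); X is split for the l1 norm, U is free.
A_eq = sp.bmat([[K1, -K1, -sp.identity(nU)],
                [None, None, K2]], format="csr")
b_eq = np.concatenate([np.zeros(nU), Y.flatten(order="F")])   # column-major vec
c = np.concatenate([np.ones(2 * nX), np.zeros(nU)])
bounds = [(0, None)] * (2 * nX) + [(None, None)] * nU

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs-ipm")
X_hat = (res.x[:nX] - res.x[nX:2 * nX]).reshape((n1, n2), order="F")
print("A_eq nonzeros:", A_eq.nnz, "vs dense split formulation:", 2 * m1 * m2 * n1 * n2)
print("recovery error:", np.linalg.norm(X_hat - X))
```

In this splitting the constraint blocks have roughly n1*m2*(n2 + m1) nonzeros plus identities, versus m1*m2*n1*n2 for the dense Kronecker product, which is the structural reason interior point methods benefit from the KCS formulation while the first approach must work with a completely dense matrix.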