On low-rank plus sparse matrix sensing

Bibliographic Details
Main Author: Vary, S
Other Authors: Tanner, J; Thompson, A
Format: Thesis
Language: English
Published: 2021
Institution: University of Oxford
Subjects: Mathematics; Statistics; Numerical analysis
Full description

<p>Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data, and is the foundation of robust principal component analysis (Candès et al., 2011). This thesis is concerned with <em>low-rank plus sparse matrix sensing</em>: the problem of recovering a matrix formed as the sum of a low-rank and a sparse matrix, together with its two components, from a number of measurements far smaller than the dimensionality of the matrix.</p>

<p>It is well known that inverse problems over low-rank matrices, such as robust principal component analysis and matrix completion, require low coherence between the low-rank matrix and the canonical basis. However, in this thesis, we demonstrate that the well-posedness issue is even more fundamental: in some cases, both robust principal component analysis and matrix completion can fail to have any solution because the set of low-rank plus sparse matrices is not closed. As a consequence, the lower restricted isometry constants (RICs) cannot be upper bounded for some low-rank plus sparse matrices unless further restrictions are imposed on the constituents. We close the set of low-rank plus sparse matrices by imposing an additional bound on the Frobenius norm of the low-rank component, which ensures that the optimisation is well-posed and that the RICs can be bounded. We show that constraining the incoherence of the low-rank component also closes the set, provided μ < √(mn)/(r√s), and satisfies a certain additivity property necessary for the analysis of recovery algorithms.</p>

<p>Compressed sensing, matrix completion, and their variants have established that data satisfying low-complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension (Foucart and Rauhut, 2013). This thesis develops similar guarantees, showing that m × n matrices that can be expressed as the sum of a rank-r matrix and an s-sparse matrix can be recovered by computationally tractable methods from 𝒪((r(m+n-r)+s) log(mn/s)) linear measurements. More specifically, we establish that the RICs for such matrices remain bounded independent of problem size provided p/mn, s/p, and r(m+n-r)/p remain fixed. Additionally, we show that semidefinite programming and two hard thresholding gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator's RICs are sufficiently small. The convex relaxation and NAHT also provably solve robust PCA with the optimal order of the number of corruptions, s = 𝒪(mn/(μ<sup>2</sup>r<sup>2</sup>)). Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.</p>
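The measurement model and the alternating hard-thresholding idea described above can be sketched in NumPy. This is a toy illustration only, assuming a dense Gaussian measurement operator and a fixed step size; the actual NIHT and NAHT algorithms studied in the thesis use normalized, adaptive step sizes and come with RIC-based convergence guarantees. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, s = 30, 30, 2, 10            # toy sizes: rank-2 plus 10-sparse
p = 4 * (r * (m + n - r) + s)         # measurements ~ model complexity, far below mn

# Ground truth: low-rank L0 plus sparse S0
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S0 = np.zeros((m, n))
S0.flat[rng.choice(m * n, size=s, replace=False)] = 5.0 * rng.standard_normal(s)
X0 = L0 + S0

# Gaussian linear measurement operator A : R^{m x n} -> R^p
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
b = A @ X0.ravel()

def hard_threshold_rank(M, r):
    """Project onto rank-r matrices via truncated SVD."""
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * sig[:r]) @ Vt[:r]

def hard_threshold_sparse(M, s):
    """Keep the s largest-magnitude entries, zero the rest."""
    out = np.zeros_like(M)
    keep = np.argpartition(np.abs(M).ravel(), -s)[-s:]
    out.flat[keep] = M.flat[keep]
    return out

# Alternating projected gradient descent (NAHT-flavoured toy loop)
L, S = np.zeros((m, n)), np.zeros((m, n))
step = 0.4                            # fixed step here; NIHT/NAHT adapt it per iteration
for _ in range(300):
    grad = (A.T @ (b - A @ (L + S).ravel())).reshape(m, n)
    L = hard_threshold_rank(L + step * grad, r)
    grad = (A.T @ (b - A @ (L + S).ravel())).reshape(m, n)
    S = hard_threshold_sparse(S + step * grad, s)

rel_err = np.linalg.norm((L + S) - X0) / np.linalg.norm(X0)
```

Note that p is proportional to r(m+n-r)+s, the number of degrees of freedom of the model, rather than to the ambient dimension mn — the central point of the sample-complexity guarantee quoted above.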