<p>Nonlinear matrix recovery is an emerging paradigm in which specific classes of high-rank matrices can be recovered from an underdetermined linear system of measurements. In particular, we consider matrices whose columns, seen as data points, belong to an algebraic variety, namely, a set defined by finitely many polynomial equations. The nonlinear structure is exploited using an appropriately chosen feature map, which induces a low-rank structure in the feature space, thus allowing recovery of the original (high-rank) matrix. The same framework applies to matrices whose columns are divided into clusters, so that we can address clustering tasks with missing entries.</p>
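<p>As a minimal illustrative sketch (a toy case, not the thesis's construction): points on the unit circle, the variety cut out by x² + y² − 1 = 0, form a full-rank data matrix, yet a degree-2 monomial feature map yields a rank-deficient feature matrix, because the defining polynomial induces one linear relation among the features.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 50)
X = np.vstack([np.cos(theta), np.sin(theta)])   # 2 x 50: points on the circle x^2 + y^2 = 1

# Degree-2 monomial feature map phi(x, y) = (1, x, y, x^2, x*y, y^2).
x, y = X
Phi = np.vstack([np.ones_like(x), x, y, x**2, x * y, y**2])  # 6 x 50 feature matrix

# The data matrix is full rank, but the polynomial relation
# 1 - x^2 - y^2 = 0 removes one dimension in feature space.
print(np.linalg.matrix_rank(X))    # 2
print(np.linalg.matrix_rank(Phi))  # 5
```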
<p>We formulate the rank minimization of the feature matrix as the minimization of a smooth cost function on a Riemannian manifold, which allows us to employ the theoretically rigorous framework of Riemannian optimization, drawing on tools from differential geometry to construct feasible iterative algorithms on specific Riemannian manifolds. In particular, we show the potential benefits of using second-order algorithmic variants instead of first-order ones, as they are able to recover the original data with high accuracy. We additionally develop an alternating minimization method to solve the recovery problem, for which we give convergence and complexity guarantees.</p>
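<p>A hedged sketch of the Riemannian machinery on the simplest possible example (the unit sphere, not the manifolds studied in the thesis): maximizing the Rayleigh quotient x<sup>T</sup>Ax by projecting the Euclidean gradient onto the tangent space and retracting back to the sphere by renormalization, so every iterate stays feasible by construction.</p>

```python
import numpy as np

A = np.diag([5.0, 2.0, 1.0, 0.5, 0.1])     # symmetric matrix with known top eigenvalue 5
rng = np.random.default_rng(1)
x = rng.standard_normal(5)
x /= np.linalg.norm(x)                      # start on the unit sphere

for _ in range(500):
    egrad = 2.0 * A @ x                     # Euclidean gradient of f(x) = x^T A x
    rgrad = egrad - (x @ egrad) * x         # project onto the tangent space at x
    x = x + 0.05 * rgrad                    # ascent step in the tangent direction
    x /= np.linalg.norm(x)                  # retraction: map back onto the sphere

print(x @ A @ x)                            # ≈ 5.0, the largest eigenvalue
```

<p>The projection–retraction pair is the basic ingredient that first- and second-order Riemannian methods alike build on; second-order variants additionally exploit curvature information of the cost.</p>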
<p>Considering algebraic varieties also leads us to propose a new approach to the denoising of point clouds that approximately fit such structures, as well as to their registration, namely, the task of estimating a transformation that aligns two point clouds representing the same shape in different coordinate systems.</p>
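<p>For context, in the rigid case with known point correspondences, registration reduces to the classical orthogonal Procrustes problem, solvable in closed form via an SVD (the Kabsch algorithm). The sketch below illustrates that baseline only; it is not the variety-based approach of the thesis.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((3, 10))                  # source point cloud: 10 points in R^3

# Ground-truth rigid motion: a proper rotation R_true and a translation t_true.
M = rng.standard_normal((3, 3))
Q_, _ = np.linalg.qr(M)
R_true = Q_ * np.sign(np.linalg.det(Q_))          # force det = +1 (odd dimension)
t_true = np.array([1.0, -2.0, 0.5])
Q = R_true @ P + t_true[:, None]                  # target cloud in the other frame

# Kabsch: center both clouds, then recover the rotation from an SVD.
Pc = P - P.mean(axis=1, keepdims=True)
Qc = Q - Q.mean(axis=1, keepdims=True)
U, _, Vt = np.linalg.svd(Pc @ Qc.T)
D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
R_est = Vt.T @ D @ U.T
t_est = Q.mean(axis=1) - R_est @ P.mean(axis=1)

print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```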
<p>Finally, we consider general optimization problems with smooth equality constraints. Because achieving feasibility for generic nonlinear equality constraints is challenging, we depart from the Riemannian framework and propose an infeasible algorithm that minimizes a smooth penalty function known as Fletcher's augmented Lagrangian. We provide optimal worst-case bounds on the number of iterations required to find approximate first- and second-order critical points.</p>
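<p>On a toy problem this penalty can be written out explicitly: minimize f(x) = x₁ + x₂ subject to c(x) = ‖x‖² − 1 = 0, whose solution is −(1, 1)/√2. Fletcher's augmented Lagrangian replaces the multiplier with its least-squares estimate λ(x), giving a smooth unconstrained penalty whose minimizer coincides with the constrained one for a moderate penalty parameter ρ. The plain finite-difference gradient descent below is only a hedged sketch, not the algorithm analyzed in the thesis.</p>

```python
import numpy as np

rho = 10.0  # penalty parameter; assumed large enough for exactness on this toy problem

def fletcher_penalty(x):
    f = x[0] + x[1]                      # objective f(x) = x1 + x2
    c = x @ x - 1.0                      # constraint c(x) = ||x||^2 - 1
    grad_f = np.array([1.0, 1.0])
    grad_c = 2.0 * x
    lam = (grad_c @ grad_f) / (grad_c @ grad_c)   # least-squares multiplier estimate
    return f - lam * c + 0.5 * rho * c**2

def num_grad(phi, x, eps=1e-6):
    """Central finite-difference gradient (for illustration only)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (phi(x + e) - phi(x - e)) / (2.0 * eps)
    return g

x = np.array([-0.5, -1.0])               # infeasible starting point
for _ in range(5000):
    x = x - 0.01 * num_grad(fletcher_penalty, x)

print(x)   # ≈ (-1/sqrt(2), -1/sqrt(2)), the constrained minimizer
```

<p>No iterate is forced to satisfy the constraint; feasibility is only attained in the limit, which is exactly what makes the approach applicable when retraction-based (feasible) Riemannian methods are not.</p>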