Computing Low-Rank Approximations of Large-Scale Matrices with the Tensor Network Randomized SVD


Bibliographic Details
Main Authors: Batselier, Kim, Yu, Wenjian, Daniel, Luca, Wong, Ngai
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: Society for Industrial & Applied Mathematics (SIAM), 2019
Online Access: https://hdl.handle.net/1721.1/121552
Description: We propose a new algorithm for the computation of a singular value decomposition (SVD) low-rank approximation of a matrix in the matrix product operator (MPO) format, also called the tensor train matrix format. Our tensor network randomized SVD (TNrSVD) algorithm is an MPO implementation of the randomized SVD algorithm that is able to compute dominant singular values and their corresponding singular vectors. In contrast to the state-of-the-art tensor-based alternating least squares SVD (ALS-SVD) and modified alternating least squares SVD (MALS-SVD) matrix approximation methods, TNrSVD can be up to 13 times faster while achieving better accuracy. In addition, our TNrSVD algorithm also produces accurate approximations in particular cases where both ALS-SVD and MALS-SVD fail to converge. We also propose a new algorithm for the fast conversion of a sparse matrix into its corresponding MPO form, which is up to 509 times faster than the standard tensor train SVD method while achieving machine precision accuracy. The efficiency and accuracy of both algorithms are demonstrated in numerical experiments.
Key words: curse of dimensionality, low-rank tensor approximation, matrix factorization, matrix product operator, singular value decomposition (SVD), tensor network, tensor train (TT) decomposition, randomized algorithm
Citation: Batselier, Kim, et al. "Computing Low-Rank Approximations of Large-Scale Matrices with the Tensor Network Randomized SVD." SIAM Journal on Matrix Analysis and Applications 39, no. 3 (January 2018): 1221–44. © 2018 Society for Industrial and Applied Mathematics.
DOI: 10.1137/17m1140480
ISSN: 0895-4798, 1095-7162
Funding: Research Grants Council (Hong Kong, China) (17246416)
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.