A fast correction approach to tensor robust principal component analysis


Bibliographic Details
Main Authors: Zhang, Zhechen, Liu, Sanyang, Lin, Zhiping, Xue, Jize, Liu, Lixia
Other Authors: School of Electrical and Electronic Engineering
Format: Journal Article
Language: English
Published: 2024
Subjects: Engineering; Tensor nuclear norm; Tensor singular value decomposition
Online Access:https://hdl.handle.net/10356/180347
author Zhang, Zhechen
Liu, Sanyang
Lin, Zhiping
Xue, Jize
Liu, Lixia
author2 School of Electrical and Electronic Engineering
collection NTU
description Tensor robust principal component analysis (TRPCA) is a useful approach for recovering low-rank data from observations corrupted by noise or outliers. However, existing TRPCA methods struggle to estimate the tensor rank and the sparsity accurately. The commonly used tensor nuclear norm (TNN) may lead to sub-optimal solutions due to the gap between TNN and the tensor rank. Additionally, the ℓ1-norm is not an ideal approximation of the ℓ0-norm, and solving TNN minimization can be computationally intensive because of the tensor singular value thresholding (t-SVT) scheme. To address these issues, a method called fast correction TNN (FC-TNN) is proposed for TRPCA. In contrast to existing methods, FC-TNN introduces a correction term to bridge the gap between TNN and the tensor rank. Furthermore, a new correction term is employed for the ℓ1-norm to achieve the desired solution. To improve computational efficiency, the Chebyshev polynomial approximation (CPA) technique is presented for computing t-SVT without requiring tensor singular value decomposition (t-SVD). The CPA technique is incorporated into the alternating direction method of multipliers (ADMM) algorithm to solve the proposed model effectively. Theoretical analysis demonstrates that FC-TNN offers a lower error bound than TNN under certain conditions. Extensive experiments on various tensor-based datasets show that the proposed method outperforms several state-of-the-art methods.
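note As a rough illustration of the t-SVT operator mentioned in the description, the sketch below is a minimal NumPy implementation of the standard FFT-based t-SVT (per-frontal-slice SVD with soft-thresholding of the singular values). It is not the paper's FC-TNN correction or its Chebyshev polynomial approximation, and the function name t_svt and the threshold parameter tau are illustrative only.

import numpy as np

def t_svt(X, tau):
    # Standard t-SVT: FFT along the third mode, soft-threshold the singular
    # values of every frontal slice in the Fourier domain, then invert the FFT.
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)        # soft-thresholding of the singular values
        Yf[:, :, k] = (U * s) @ Vh
    # The output is real up to round-off because conjugate symmetry is preserved.
    return np.real(np.fft.ifft(Yf, axis=2))

# Example: L = t_svt(np.random.randn(30, 30, 10), tau=0.5)

In a standard TNN-based TRPCA solver this operator is the proximal step for the low-rank component inside each ADMM iteration; per the description, FC-TNN adds correction terms for TNN and the ℓ1-norm and uses a Chebyshev polynomial approximation to avoid computing the per-slice SVDs.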
format Journal Article
id ntu-10356/180347
institution Nanyang Technological University
language English
publishDate 2024
record_format dspace
spelling ntu-10356/180347 2024-10-02T05:59:39Z
funding This research has been supported by the National Natural Science Foundation of China (No. 12271419) and the Natural Science Basic Research Program of Shaanxi (Program No. 2023-JC-YB-056).
citation Zhang, Z., Liu, S., Lin, Z., Xue, J. & Liu, L. (2024). A fast correction approach to tensor robust principal component analysis. Applied Mathematical Modelling, 128, 195-219. https://dx.doi.org/10.1016/j.apm.2024.01.020
issn 0307-904X
doi 10.1016/j.apm.2024.01.020
scopus 2-s2.0-85184993043
rights © 2024 Elsevier Inc. All rights reserved.
title A fast correction approach to tensor robust principal component analysis
topic Engineering
Tensor nuclear norm
Tensor singular value decomposition
url https://hdl.handle.net/10356/180347