Optimality of the Approximation and Learning by the Rescaled Pure Super Greedy Algorithms

We propose the Weak Rescaled Pure Super Greedy Algorithm (WRPSGA) for approximation with respect to a dictionary <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mi mathvariant="script">D</mi></semantics></math></inline-formula> in Hilbert space. The WRPSGA is simpler than some popular greedy algorithms. We show that the convergence rate of the RPSGA on the closure of the convex hull of the <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mi>μ</mi></semantics></math></inline-formula>-coherent dictionary <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mi mathvariant="script">D</mi></semantics></math></inline-formula> is optimal. Then, we design the Rescaled Pure Super Greedy Learning Algorithm (RPSGLA) for kernel-based supervised learning. We prove that the convergence rate of the RPSGLA can be arbitrarily close to the best rate <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mi>O</mi><mo>(</mo><msup><mi>m</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>)</mo></mrow></semantics></math></inline-formula> under some mild assumptions.
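The abstract names the algorithm only at a high level. As a rough, finite-dimensional illustration of the "weak rescaled pure super greedy" idea (not the authors' exact formulation; the function name `wrpsga` and the parameters `s` and `t` below are illustrative assumptions), each iteration selects the `s` dictionary elements whose inner products with the residual are within a factor `t` of the largest, takes a pure greedy step with all of them at once, and then rescales the whole approximant:

```python
import numpy as np

def wrpsga(f, D, steps=50, s=2, t=1.0):
    """Sketch of a weak rescaled pure super greedy iteration.

    f     : target vector in R^n (stand-in for an element of a Hilbert space)
    D     : (n, N) array whose unit-norm columns play the role of the dictionary
    s     : number of elements selected per iteration (the "super" step)
    t     : weakness parameter in (0, 1]; t = 1 recovers the non-weak selection
    """
    approx = np.zeros_like(f, dtype=float)
    for _ in range(steps):
        r = f - approx                       # current residual
        inner = D.T @ r                      # inner products <r, g> for every g in D
        # weak super greedy selection: keep s indices whose |<r, g>| is at
        # least t times the s-th largest absolute inner product
        order = np.argsort(-np.abs(inner))
        thresh = t * np.abs(inner[order[s - 1]])
        picked = [j for j in order if np.abs(inner[j]) >= thresh][:s]
        # pure greedy update with the selected elements
        candidate = approx + D[:, picked] @ inner[picked]
        denom = candidate @ candidate
        if denom == 0:
            break
        # rescaling step: replace the candidate by its best scalar multiple,
        # i.e. the orthogonal projection of f onto span{candidate}
        approx = (f @ candidate / denom) * candidate
    return approx
```

The rescaling is what distinguishes this family from the plain pure greedy algorithm: a single one-dimensional projection per step corrects the overall scale of the approximant without a full orthogonal projection onto all selected elements.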


Bibliographic Details
Main Authors: Wenhui Zhang, Peixin Ye, Shuo Xing, Xu Xu
Format: Article
Language: English
Published: MDPI AG 2022-08-01
Series: Axioms
Subjects: super greedy algorithms; sparse approximation; supervised learning; convergence rate
Online Access:https://www.mdpi.com/2075-1680/11/9/437
ISSN: 2075-1680
DOI: 10.3390/axioms11090437 (Axioms, vol. 11, no. 9, art. 437, published 2022-08-01)
Affiliations: Wenhui Zhang, Peixin Ye, Shuo Xing (School of Mathematics and LPMC, Nankai University, Tianjin 300071, China); Xu Xu (School of Science, China University of Geosciences, Beijing 100083, China)