Boosting Simple Learners

Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules of thumb" from an "easy-to-learn class" (Schapire and Freund '12; Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has bounded VC dimension. We focus on two main questions: (i) Oracle complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega(1/\gamma^2)$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}(1/\gamma)$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we introduce combinatorial-geometric parameters which capture expressivity in boosting. As a corollary we provide an affirmative answer to the second question for well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
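The lower bound discussed in the abstract concerns classical boosting, where weak hypotheses are combined by a weighted majority vote. As background only (none of this code is from the paper), here is a minimal pure-Python AdaBoost sketch over decision stumps; the toy dataset, the stump grid, and the round budget are illustrative assumptions:

```python
import math

# Toy 1-D dataset (illustrative assumption): the label pattern + + - - + +
# is not realizable by any single stump, but a weighted vote of a few
# stumps fits it exactly.
X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [+1, +1, -1, -1, +1, +1]

def stump(theta, s):
    """Decision stump: predict s if x > theta, else -s (s in {+1, -1})."""
    return lambda x: s if x > theta else -s

# Candidate weak hypotheses: stumps at all midpoints, both polarities.
H = [stump(t + 0.5, s) for t in range(-1, 6) for s in (+1, -1)]

def predict(ensemble, x):
    # Weighted majority vote over the selected weak hypotheses.
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

def adaboost(X, y, H, T):
    n = len(X)
    D = [1.0 / n] * n                 # distribution over training examples
    ensemble = []                     # list of (vote weight, hypothesis)
    for _ in range(T):
        # Weak learner: pick the stump with smallest weighted error under D.
        werr = lambda h: sum(D[i] for i in range(n) if h(X[i]) != y[i])
        h = min(H, key=werr)
        eps = werr(h)
        if eps == 0 or eps >= 0.5:    # no usable weak hypothesis left
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, h))
        # Reweight: increase the weight of misclassified examples.
        D = [D[i] * math.exp(-alpha * y[i] * h(X[i])) for i in range(n)]
        Z = sum(D)
        D = [d / Z for d in D]
        if all(predict(ensemble, X[i]) == y[i] for i in range(n)):
            break                     # ensemble already fits the sample
    return ensemble

ens = adaboost(X, y, H, T=10)
print([predict(ens, x) for x in X])  # → [1, 1, -1, -1, 1, 1]
```

On this data a few stumps suffice; the $\tilde{O}(1/\gamma)$ oracle-complexity result in the abstract is precisely about replacing such flat majority-vote aggregation with deeper aggregation rules.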

Bibliographic Details
Main Authors: Noga Alon, Alon Gonen, Elad Hazan, Shay Moran
Format: Article
Language: English
Published: TheoretiCS Foundation e.V. 2023-06-01
Series: TheoretiCS
Subjects: computer science - machine learning; statistics - machine learning
Online Access:https://theoretics.episciences.org/9253/pdf
collection DOAJ
description Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules of thumb" from an "easy-to-learn class" (Schapire and Freund '12; Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has bounded VC dimension. We focus on two main questions: (i) Oracle complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega(1/\gamma^2)$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}(1/\gamma)$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we introduce combinatorial-geometric parameters which capture expressivity in boosting. As a corollary we provide an affirmative answer to the second question for well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
first_indexed 2024-03-08T07:59:26Z
format Article
id doaj.art-3b38bca5a2254025b87d353e58c53b7a
institution Directory Open Access Journal
issn 2751-4838
language English
last_indexed 2024-03-08T07:59:26Z
publishDate 2023-06-01
publisher TheoretiCS Foundation e.V.
record_format Article
series TheoretiCS
spelling doaj.art-3b38bca5a2254025b87d353e58c53b7a (updated 2024-02-02T12:32:47Z)
Published in TheoretiCS, Volume 2 (2023-06-01), TheoretiCS Foundation e.V., ISSN 2751-4838, doi:10.46298/theoretics.23.8
Authors: Noga Alon; Alon Gonen; Elad Hazan (https://orcid.org/0000-0002-1566-3216); Shay Moran (https://orcid.org/0000-0002-8662-2737)
title Boosting Simple Learners
topic computer science - machine learning
statistics - machine learning
url https://theoretics.episciences.org/9253/pdf