Cascading Regularized Classifiers
Among the various methods for combining classifiers, Boosting was originally conceived as a stratagem to cascade pairs of classifiers through their disagreement. I recover the same idea from the work of Niyogi et al. to show how to loosen the requirement of weak learnability, central to Boosting, and in...
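The abstract is only a sketch, and the paper's own cascading construction is not shown in this record. As background for the Boosting mechanism it alludes to, the following is a minimal AdaBoost sketch with decision stumps (an illustrative assumption, not the paper's method): each round reweights the training points so the next weak classifier concentrates on the examples the current ensemble gets wrong.

```python
import numpy as np

def fit_stump(X, y, w):
    """Exhaustively find the threshold stump minimizing weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    """AdaBoost: upweight misclassified points so each new weak
    learner focuses on the current ensemble's mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)           # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)  # raise weight where pred != y
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all stumps in the ensemble."""
    score = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        score += alpha * np.where(X[:, j] <= thr, sign, -sign)
    return np.sign(score)
```

On a toy 1D dataset where the positive class is an interval (unlearnable by any single stump), a few boosted rounds combine stumps into a perfect fit, illustrating how weak learners are cascaded into a strong one.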
Main Author: Perez-Breva, Luis
Language: en_US
Published: 2005
Online Access: http://hdl.handle.net/1721.1/30463
Similar Items

- Model Selection in Summary Evaluation
  by: Perez-Breva, Luis, et al.
  Published: (2004)
- Regularization Through Feature Knock Out
  by: Wolf, Lior, et al.
  Published: (2005)
- Bagging Regularizes
  by: Poggio, Tomaso, et al.
  Published: (2004)
- On the Dirichlet Prior and Bayesian Regularization
  by: Steck, Harald, et al.
  Published: (2004)
- Asymptotics of Gaussian Regularized Least-Squares
  by: Lippert, Ross, et al.
  Published: (2005)