Automatic Recommendation Method for Classifier Ensemble Structure Using Meta-Learning


Bibliographic Details
Main Authors: Robercy Alves Da Silva, Anne Magaly De Paula Canuto, Cephas Alves Da Silveira Barreto, Joao Carlos Xavier-Junior
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/9493882/
Description
Summary: Machine Learning (ML) is a field that aims to develop efficient techniques to provide intelligent decision-making solutions to complex real-world problems. Among the different ML structures, classifier ensembles have been successfully applied to several classification domains. A classifier ensemble is composed of a set of classifiers (specialists) organized in parallel, and it is able to produce a combined decision for an input pattern (instance). Although classifier ensembles have proved to be robust in several applications, an important issue that is always brought to attention is the ensemble's structure. In other words, the correct definition of its structure, such as the number and type of classifiers and the aggregation method, plays an important role in its performance. Usually, an exhaustive testing and evaluation process is required to define the ideal structure for an ensemble. Aiming to produce an interesting investigation in this field, this paper proposes two new approaches for the automatic recommendation of classifier ensemble structure, using meta-learning to recommend three of these important parameters: the type of classifier, the number of base classifiers, and the aggregation method. The main aim is to provide a robust structure in a simple and fast way. In this analysis, five well-known classification algorithms are used as base classifiers of the ensemble: kNN (k-Nearest Neighbors), DT (Decision Tree), RF (Random Forest), NB (Naive Bayes) and LR (Logistic Regression). Additionally, the classifier ensembles are evaluated using seven different strategies as aggregation functions: HV (Hard Voting), SV (Soft Voting), LR (Logistic Regression), SVM (Support Vector Machine), NB (Naive Bayes), MLP (Multilayer Perceptron) and DT (Decision Tree). The empirical analysis shows that our approach can lead to robust classifier ensembles in the majority of the analysed cases.
ISSN:2169-3536
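
The ensemble structure described in the summary (a set of base classifiers run in parallel, with an aggregation function combining their outputs) can be illustrated with a minimal sketch. This is not the paper's meta-learning recommender; it simply assembles the five base-classifier types named in the abstract and combines them with hard or soft voting using scikit-learn's `VotingClassifier`. The function name `build_ensemble` and the Iris demo data are illustrative choices, not taken from the article.

```python
# Illustrative sketch of a parallel classifier ensemble (not the paper's
# recommendation method): one instance of each base-classifier type from
# the abstract, combined by a voting aggregation function.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier


def build_ensemble(voting="hard"):
    """Build a voting ensemble over the five base learners named in the abstract."""
    base = [
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("lr", LogisticRegression(max_iter=1000)),
    ]
    # voting="hard" takes a majority vote over predicted labels (HV);
    # voting="soft" averages the predicted class probabilities (SV).
    return VotingClassifier(estimators=base, voting=voting)


# Small demonstration on the Iris dataset.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = build_ensemble("soft").fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

The other aggregation strategies mentioned in the abstract (LR, SVM, NB, MLP, DT as combiners) correspond to stacking, where a second-level classifier is trained on the base classifiers' outputs, e.g. via scikit-learn's `StackingClassifier`.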