KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models
Computer vision (CV) applications are becoming a crucial factor in the growth of developed economies worldwide. The widespread use of CV applications has created a growing demand for accurate models. Therefore, the subfields of CV focus on improving existing models and developing new methods and a...
Main Authors: | Javokhir Musaev, Abdulaziz Anorboev, Ngoc Thanh Nguyen, Dosam Hwang |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Epoch ensemble; channel ensemble; top N prediction probabilities ensemble |
Online Access: | https://ieeexplore.ieee.org/document/10025707/ |
_version_ | 1797938209392427008 |
---|---|
author | Javokhir Musaev; Abdulaziz Anorboev; Ngoc Thanh Nguyen; Dosam Hwang |
author_facet | Javokhir Musaev; Abdulaziz Anorboev; Ngoc Thanh Nguyen; Dosam Hwang |
author_sort | Javokhir Musaev |
collection | DOAJ |
description | Computer vision (CV) applications are becoming a crucial factor in the growth of developed economies worldwide. The widespread use of CV applications has created a growing demand for accurate models. Therefore, the subfields of CV focus on improving existing models and on developing new methods and algorithms to meet the demands of different sectors. Simultaneously, research on ensemble learning provides effective tools for increasing model accuracy. Nevertheless, there is a significant gap in research that exploits both data representation and model features. This led us to develop KeepNMax—an ensemble of image channels and epochs that keeps the top N maximum prediction probabilities at the final step. Using KeepNMax, the ensemble error was reduced and the amount of data knowledge available to the ensemble model was increased. Nine datasets were trained. Because each dataset had three channels, the images were split into three separate channels, and each channel was trained independently using the same model architecture. In addition, the datasets were trained without channel splitting using the same model architecture. After training, selected epochs were ensembled with the best epoch of the training run. Furthermore, two different model architectures were used to check the model dependency of the proposed method, and remarkable results were achieved in both cases. The method is proposed for deep-learning classification models. Despite its simplicity, the proposed method improved the results of the CNN and ConvMixer models on the datasets used. Classic training, bootstrap aggregation, and random split methods served as the baselines. For most datasets, significant improvements were obtained using KeepNMax. The success of the method is explained by the unique true prediction (<inline-formula> <tex-math notation="LaTeX">$UTP$ </tex-math></inline-formula>) scope of each model.
By ensembling the models, the prediction scope was enlarged, allowing the ensemble to represent broader knowledge about the datasets than a single model. |
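The abstract describes KeepNMax only at a high level: several models (per-channel and per-epoch) each produce class probabilities, and only the top N maximum probabilities of each model are kept before the final ensemble. As an illustration only — this is a minimal sketch of that idea, not the authors' released implementation — a top-N-maximum ensemble over per-model softmax outputs could look like the following, assuming each model yields a (samples × classes) probability array:

```python
import numpy as np

def keep_n_max(probs, n):
    """Zero out all but the n largest probabilities in each row (sample)."""
    out = np.zeros_like(probs)
    top_idx = np.argsort(probs, axis=1)[:, -n:]      # indices of the n largest per row
    rows = np.arange(probs.shape[0])[:, None]
    out[rows, top_idx] = probs[rows, top_idx]        # copy only the kept entries
    return out

def ensemble_predict(model_probs, n=2):
    """Average the top-n-filtered probabilities across models, then argmax."""
    filtered = [keep_n_max(p, n) for p in model_probs]
    return np.mean(filtered, axis=0).argmax(axis=1)
```

Here `model_probs` would hold the softmax outputs of the channel-specific and epoch-snapshot models; the function names and the averaging step are assumptions made for the sketch, since the record does not specify how the kept probabilities are combined.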
first_indexed | 2024-04-10T18:56:12Z |
format | Article |
id | doaj.art-cedb4c4c906b40b0ad7cfbc663b1656a |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-04-10T18:56:12Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-cedb4c4c906b40b0ad7cfbc663b1656a 2023-02-01T00:00:32Z eng IEEE IEEE Access 2169-3536 2023-01-01, vol. 11, pp. 9339–9350, doi:10.1109/ACCESS.2023.3239658, article 10025707. KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models. Javokhir Musaev (https://orcid.org/0000-0003-4656-0479), Department of Computer Engineering, Yeungnam University, Gyeongsan, South Korea; Abdulaziz Anorboev (https://orcid.org/0000-0003-1416-7138), Department of Computer Engineering, Yeungnam University, Gyeongsan, South Korea; Ngoc Thanh Nguyen (https://orcid.org/0000-0002-3247-2948), Faculty of Information and Communication Technology, Wroclaw University of Science and Technology, Wroclaw, Poland; Dosam Hwang (https://orcid.org/0000-0001-7851-7323), Department of Computer Engineering, Yeungnam University, Gyeongsan, South Korea. https://ieeexplore.ieee.org/document/10025707/ Keywords: epoch ensemble; channel ensemble; top N prediction probabilities ensemble |
spellingShingle | Javokhir Musaev; Abdulaziz Anorboev; Ngoc Thanh Nguyen; Dosam Hwang; KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models; IEEE Access; epoch ensemble; channel ensemble; top N prediction probabilities ensemble |
title | KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models |
title_full | KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models |
title_fullStr | KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models |
title_full_unstemmed | KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models |
title_short | KeepNMax: Keep N Maximum of Epoch-Channel Ensemble Method for Deep Learning Models |
title_sort | keepnmax keep n maximum of epoch channel ensemble method for deep learning models |
topic | Epoch ensemble; channel ensemble; top N prediction probabilities ensemble |
url | https://ieeexplore.ieee.org/document/10025707/ |
work_keys_str_mv | AT javokhirmusaev keepnmaxkeepnmaximumofepochchannelensemblemethodfordeeplearningmodels AT abdulazizanorboev keepnmaxkeepnmaximumofepochchannelensemblemethodfordeeplearningmodels AT ngocthanhnguyen keepnmaxkeepnmaximumofepochchannelensemblemethodfordeeplearningmodels AT dosamhwang keepnmaxkeepnmaximumofepochchannelensemblemethodfordeeplearningmodels |