On Selection of a Benchmark by Determining the Algorithms’ Qualities
Main Authors: Iztok Fister, Janez Brest, Andres Iglesias, Akemi Galvez, Suash Deb, Iztok Fister
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Subjects: Evolutionary algorithms; benchmark functions; differential evolution
Online Access: https://ieeexplore.ieee.org/document/9350587/
Record Source: DOAJ (Directory of Open Access Journals)
Description: The motivation for this article comes from an issue that developers of newly proposed nature-inspired algorithms commonly face today: how to select a test benchmark so that it highlights the quality of the developed algorithm as fairly as possible? To this end, the CEC Competition on Real-Parameter Single-Objective Optimization benchmarks, issued several times over the last decade, serve as a testbed for evaluating the collection of nature-inspired algorithms selected in our study. The article addresses two research questions: (1) how the selected benchmark affects the ranking of a particular algorithm, and (2) whether it is possible to find a best algorithm capable of outperforming all the others on all the selected benchmarks. Ten outstanding algorithms (winners of particular competitions) from different periods of the last decade were collected and applied to the benchmarks issued during the same time period. A comparative analysis showed a strong correlation between the rankings of the algorithms and the benchmarks used, although some deviations arose in ranking the best algorithms. The possible reasons for these deviations are exposed and commented on.
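The comparative analysis summarized above ranks a set of algorithms on several CEC benchmark suites and then checks how well those rankings agree across suites. The article's own procedure and data are not reproduced in this record; the snippet below is only a minimal illustrative sketch, using a standard rank-correlation measure (Kendall's tau) and entirely made-up algorithm names and error values, of how agreement between rankings obtained on two benchmark suites could be quantified.

```python
# Minimal sketch (not the authors' code): rank a set of algorithms on two
# hypothetical benchmark suites by mean error and measure how strongly the
# two rankings agree. All scores below are made-up placeholders; lower is better.
from scipy.stats import kendalltau, rankdata

mean_error = {
    "CEC-2014": {"AlgA": 1.2e-3, "AlgB": 4.5e-2, "AlgC": 7.8e-1, "AlgD": 3.1e-3},
    "CEC-2017": {"AlgA": 2.0e-3, "AlgB": 6.0e-2, "AlgC": 5.5e-1, "AlgD": 1.9e-3},
}

algorithms = sorted(mean_error["CEC-2014"])           # fixed algorithm order
ranks = {
    suite: rankdata([scores[a] for a in algorithms])  # rank 1 = lowest error
    for suite, scores in mean_error.items()
}

tau, p_value = kendalltau(ranks["CEC-2014"], ranks["CEC-2017"])
print(f"Kendall's tau between benchmark rankings: {tau:.2f} (p = {p_value:.3f})")
```

Kendall's tau is used here only because it is a common, easy-to-interpret measure of rank agreement; the article may rely on a different statistical procedure.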
ISSN: 2169-3536
Published in: IEEE Access, vol. 9, pp. 51166-51178, 2021. DOI: 10.1109/ACCESS.2021.3058285 (IEEE Xplore document 9350587).
Author affiliations:
- Iztok Fister: Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia
- Janez Brest (ORCID 0000-0001-5864-3533): Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia
- Andres Iglesias (ORCID 0000-0002-5672-8274): Department of Applied Mathematics and Computational Sciences, University of Cantabria, Santander, Spain
- Akemi Galvez (ORCID 0000-0002-2100-2289): Department of Applied Mathematics and Computational Sciences, University of Cantabria, Santander, Spain
- Suash Deb (ORCID 0000-0002-7276-4400): IT & Educational Consultant, Ranchi, India
- Iztok Fister (ORCID 0000-0002-6418-1272): Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia