Enhancing neural network classification using fractional-order activation functions
In this paper, a series of novel activation functions is presented, derived using the improved Riemann–Liouville conformable fractional derivative (RLCFD). This study investigates the use of fractional activation functions in Multilayer Perceptron (MLP) models and their impact on the performance of classification tasks, verified using the IRIS, MNIST and FMNIST datasets. Fractional activation functions introduce a non-integer power exponent, allowing for improved capturing of complex patterns and representations. The experiment compares MLP models employing fractional activation functions, such as fractional sigmoid, hyperbolic tangent and rectified linear units, against traditional models using standard activation functions, their improved versions and existing fractional functions. The numerical studies have confirmed the theoretical observations mentioned in the paper. The findings highlight the potential of the new functions as a valuable tool for classification in deep learning. The study suggests that incorporating fractional activation functions in MLP architectures can lead to superior accuracy and robustness.
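The abstract's central idea is an activation function carrying a non-integer power exponent, with a fractional order parameter (here called `alpha`). The exact RLCFD-derived forms are defined in the paper itself, not in this record, so the sketch below is a hypothetical illustration of the general idea only: a power-transformed sigmoid and ReLU that reduce to the standard functions when `alpha = 1`.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def fractional_sigmoid(x, alpha=1.0):
    # Non-integer power exponent applied to the input; alpha = 1
    # recovers the standard sigmoid.  Hypothetical illustration,
    # not the paper's exact RLCFD-derived function.
    return sigmoid(np.sign(x) * np.abs(x) ** alpha)

def fractional_relu(x, alpha=1.0):
    # ReLU-like function with a fractional power on the positive
    # branch; abs() keeps the power well-defined where x <= 0.
    return np.where(x > 0, np.abs(x) ** alpha, 0.0)
```

As in the paper's experiments, such functions would be dropped into an MLP in place of the standard activations, with `alpha` either fixed or treated as a tunable hyperparameter.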
Main Authors: Meshach Kumar, Utkal Mehta, Giansalvo Cirrincione
Format: Article
Language: English
Published: KeAi Communications Co. Ltd., 2024-01-01
Series: AI Open
Subjects: Fractional calculus; Neural networks; Classification; Multilayer perceptron; Activation functions; Accuracy
Online Access: http://www.sciencedirect.com/science/article/pii/S266665102300030X
author | Meshach Kumar; Utkal Mehta; Giansalvo Cirrincione
collection | DOAJ |
id | doaj.art-14451d48de4b4ba281bc9a7f0d130868 |
institution | Directory Open Access Journal |
issn | 2666-6510 |
affiliations | Meshach Kumar: Discipline of Electrical and Electronic Engineering, The University of the South Pacific, Laucala Campus, Fiji (corresponding author); Utkal Mehta: Discipline of Electrical and Electronic Engineering, The University of the South Pacific, Laucala Campus, Fiji; Giansalvo Cirrincione: Lab. LTI, University of Picardie Jules Verne, Amiens, France