hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras

Bibliographic Details
Main Authors: Luca Parisi, Renfei Ma, Narrendar RaviChandran, Matteo Lanzillotta
Format: Article
Language: English
Published: Elsevier 2021-12-01
Series: Machine Learning with Applications
Subjects: Activation, Deep learning, Convolutional Neural Network, Long short-term memory, TensorFlow, Keras
Online Access: http://www.sciencedirect.com/science/article/pii/S2666827021000566
_version_ 1819096842397286400
author Luca Parisi
Renfei Ma
Narrendar RaviChandran
Matteo Lanzillotta
author_facet Luca Parisi
Renfei Ma
Narrendar RaviChandran
Matteo Lanzillotta
author_sort Luca Parisi
collection DOAJ
description This paper presents the ‘hyper-sinh’, a variation of the m-arcsinh activation function suitable for Deep Learning (DL)-based algorithms for supervised learning, including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), such as the Long Short-Term Memory (LSTM). hyper-sinh, developed in the open-source Python libraries TensorFlow and Keras, is thus described and validated as an accurate and reliable activation function for shallow and deep neural networks. Improvements in accuracy and reliability in image and text classification tasks on six (N=6) medium-to-large open-source benchmark datasets are discussed. Experimental results demonstrate the overall competitive classification performance of the novel hyper-sinh function, which yielded the highest performance on shallow and deep neural networks. Furthermore, this activation function is evaluated against other gold standard activation functions, demonstrating its overall competitive accuracy and reliability for both image and text classification tasks.
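
To make the description above concrete, the following minimal Python sketch shows how an activation of this kind can be defined and passed to a Keras model. The piecewise formula used here (sinh(x)/3 for positive inputs, x**3/4 otherwise) and the toy CNN architecture are assumptions for illustration only, not taken verbatim from the paper; consult the article and the authors' open-source release for the exact definition of hyper-sinh.

import tensorflow as tf
from tensorflow import keras

def hyper_sinh(x):
    # Assumed piecewise form for illustration: scaled hyperbolic sine for
    # positive inputs, scaled cubic for non-positive inputs. Verify against
    # the paper before relying on this definition.
    return tf.where(x > 0, tf.math.sinh(x) / 3.0, tf.pow(x, 3) / 4.0)

# Illustrative use in a small image classifier; any Keras layer that accepts
# a callable activation can use the function directly.
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(32, 3, activation=hyper_sinh),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

The same callable can also be supplied to recurrent layers (e.g. via the activation argument of keras.layers.LSTM) for text classification models.
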
first_indexed 2024-12-22T00:05:37Z
format Article
id doaj.art-eb18be0bf13e46ab949344ae10030728
institution Directory Open Access Journal
issn 2666-8270
language English
last_indexed 2024-12-22T00:05:37Z
publishDate 2021-12-01
publisher Elsevier
record_format Article
series Machine Learning with Applications
spelling doaj.art-eb18be0bf13e46ab949344ae10030728
2022-12-21T18:45:34Z
eng
Elsevier
Machine Learning with Applications
2666-8270
2021-12-01
Volume 6, Article 100112
hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
Luca Parisi: Faculty of Business and Law (Artificial Intelligence Specialism), Coventry University, Coventry, United Kingdom; University of Auckland Rehabilitative Technologies Association (UARTA), University of Auckland, 11 Symonds Street, Auckland, 1010, New Zealand; Corresponding author at: Faculty of Business and Law (Artificial Intelligence Specialism), Coventry University, Coventry, United Kingdom.
Renfei Ma: Warshel Institute for Computational Biology, The Chinese University of Hong Kong, Shenzhen (CUHK-SZ), Shenzhen, China; University of Auckland Rehabilitative Technologies Association (UARTA), University of Auckland, 11 Symonds Street, Auckland, 1010, New Zealand
Narrendar RaviChandran: University of Auckland Rehabilitative Technologies Association (UARTA), University of Auckland, 11 Symonds Street, Auckland, 1010, New Zealand
Matteo Lanzillotta: Department of Counselling Psychology and Psychotherapy, Centro Studi Eteropoiesi, Turin, Italy; University of Auckland Rehabilitative Technologies Association (UARTA), University of Auckland, 11 Symonds Street, Auckland, 1010, New Zealand
This paper presents the ‘hyper-sinh’, a variation of the m-arcsinh activation function suitable for Deep Learning (DL)-based algorithms for supervised learning, including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), such as the Long Short-Term Memory (LSTM). hyper-sinh, developed in the open-source Python libraries TensorFlow and Keras, is thus described and validated as an accurate and reliable activation function for shallow and deep neural networks. Improvements in accuracy and reliability in image and text classification tasks on six (N=6) medium-to-large open-source benchmark datasets are discussed. Experimental results demonstrate the overall competitive classification performance of the novel hyper-sinh function, which yielded the highest performance on shallow and deep neural networks. Furthermore, this activation function is evaluated against other gold standard activation functions, demonstrating its overall competitive accuracy and reliability for both image and text classification tasks.
http://www.sciencedirect.com/science/article/pii/S2666827021000566
Activation
Deep learning
Convolutional Neural Network
Long short-term memory
TensorFlow
Keras
spellingShingle Luca Parisi
Renfei Ma
Narrendar RaviChandran
Matteo Lanzillotta
hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
Machine Learning with Applications
Activation
Deep learning
Convolutional Neural Network
Long short-term memory
TensorFlow
Keras
title hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
title_full hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
title_fullStr hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
title_full_unstemmed hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
title_short hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
title_sort hyper sinh an accurate and reliable function from shallow to deep learning in tensorflow and keras
topic Activation
Deep learning
Convolutional Neural Network
Long short-term memory
TensorFlow
Keras
url http://www.sciencedirect.com/science/article/pii/S2666827021000566
work_keys_str_mv AT lucaparisi hypersinhanaccurateandreliablefunctionfromshallowtodeeplearningintensorflowandkeras
AT renfeima hypersinhanaccurateandreliablefunctionfromshallowtodeeplearningintensorflowandkeras
AT narrendarravichandran hypersinhanaccurateandreliablefunctionfromshallowtodeeplearningintensorflowandkeras
AT matteolanzillotta hypersinhanaccurateandreliablefunctionfromshallowtodeeplearningintensorflowandkeras