Developing Novel Robust Loss Functions-Based Classification Layers for DLLSTM Neural Networks

Bibliographic Details
Main Authors: Mohamad Abou Houran, Mohamed H. Essai Ali, Adel B. Abdel-Raman, Eman A. Badry, Alaaeldien Hassan, Hany A. Atallah
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10124197/
Description
Summary: In this paper, we suggest improving the performance of developed activation function-based Deep Learning Long Short-Term Memory (DLLSTM) structures by employing robust loss functions such as Mean Absolute Error (MAE) and Sum Squared Error (SSE) to create new classification layers. The classification layer, where the loss function resides, is the last layer in any DLLSTM neural network structure. The LSTM is an improved recurrent neural network that addresses the vanishing gradient problem, among other issues. Fast convergence and optimal performance depend on the loss function. Three loss functions (the default Crossentropyex, MAE, and SSE), each computing the error between the actual and desired output, were used to examine the effectiveness of the suggested DLLSTM classifier on two distinct applications. The results show that one of the suggested classifiers' loss functions, SSE, outperforms the other loss functions. The suggested activation functions Softsign, Modified-Elliott, Root-sig, Bi-tanh1, Bi-tanh2, Sech, and wave are more accurate than the tanh function.
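For readers unfamiliar with the three loss functions named in the abstract, the sketch below implements MAE, SSE, and cross-entropy in plain NumPy, along with the standard softsign and sech activations. This is a minimal illustration based on the textbook definitions of these functions, not the authors' implementation (the paper reports results with MATLAB's default Crossentropyex classification layer); the paper-specific activations Modified-Elliott, Root-sig, Bi-tanh1, Bi-tanh2, and wave are defined only in the article itself and are therefore omitted here.

import numpy as np

def mae_loss(y_pred, y_true):
    # Mean Absolute Error: mean of |prediction - target| over all outputs
    return np.mean(np.abs(y_pred - y_true))

def sse_loss(y_pred, y_true):
    # Sum Squared Error: sum of squared prediction-target differences
    return np.sum((y_pred - y_true) ** 2)

def cross_entropy_loss(y_pred, y_true, eps=1e-12):
    # Cross-entropy between one-hot targets and predicted class probabilities
    # (the role played by the default Crossentropyex layer in the paper)
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)))

def softsign(x):
    # Softsign activation: x / (1 + |x|), a gentler-saturating alternative to tanh
    return x / (1.0 + np.abs(x))

def sech(x):
    # Sech activation: 1 / cosh(x)
    return 1.0 / np.cosh(x)

# Toy 3-class example: one-hot target vs. softmax-style prediction
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.8, 0.1])
print(mae_loss(y_pred, y_true))            # ~0.1333
print(sse_loss(y_pred, y_true))            # 0.06
print(cross_entropy_loss(y_pred, y_true))  # ~0.2231

In the paper's setting, MAE or SSE would replace the cross-entropy computation inside the DLLSTM's final classification layer, while the listed activations replace tanh inside the LSTM cells; the sketch above only shows the underlying formulas, not that wiring.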
ISSN:2169-3536