Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization
Gradient methods are preferred for training and pruning neural networks because regularization terms are primarily intended to remove redundant weights from the network. Many machine learning libraries use elastic net regularization (ENR), also called double regularization, which is a combination...
Main Author: | Khidir Shaib Mohamed |
---|---|
Format: | Article |
Language: | English |
Published: | IFSA Publishing, S.L., 2023-05-01 |
Series: | Sensors & Transducers |
Subjects: | convergence; batch gradient method; smoothing elastic net regularization; pi-sigma neural networks |
Online Access: | https://sensorsportal.com/HTML/DIGEST/may_2023/Vol_260/P_3289.pdf |
_version_ | 1797743578422706176 |
---|---|
author | Khidir Shaib Mohamed |
author_facet | Khidir Shaib Mohamed |
author_sort | Khidir Shaib Mohamed |
collection | DOAJ |
description | Gradient methods are preferred for training and pruning neural networks because regularization terms are primarily intended to remove redundant weights from the network. Many machine learning libraries use elastic net regularization (ENR), also called double regularization, which is a combination of L1 and L2 regularizations and tends to have a grouping effect in which correlated input features are given equal weights. This paper proposes a batch gradient method with smoothing elastic net regularization for pruning feedforward polynomial neural networks (FFPNNs), especially pi-sigma neural networks (PSNNs). Unfortunately, because elastic net regularization contains the 1-norm, which is non-differentiable (although it does not produce an NP-hard problem), the gradient method cannot be used directly. To overcome this obstacle, we replace the 1-norm with a continuous, differentiable function, which yields the smoothing elastic net regularization. Under this setting, a monotonicity theorem and two convergence theorems, covering weak convergence and strong convergence, are established. The experimental findings support the validity of the proposed theorems. According to the numerical results, the smoothing double regularization improves generalization performance and accelerates the learning process. |
first_indexed | 2024-03-12T14:57:28Z |
format | Article |
id | doaj.art-5416a8c91f3f464d8e0a02fbfeb61e90 |
institution | Directory Open Access Journal |
issn | 2306-8515 1726-5479 |
language | English |
last_indexed | 2024-03-12T14:57:28Z |
publishDate | 2023-05-01 |
publisher | IFSA Publishing, S.L. |
record_format | Article |
series | Sensors & Transducers |
spelling | doaj.art-5416a8c91f3f464d8e0a02fbfeb61e90 2023-08-14T16:03:34Z eng IFSA Publishing, S.L. Sensors & Transducers 2306-8515 1726-5479 2023-05-01 26011423 Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization Khidir Shaib Mohamed, Department of Mathematics, College of Sciences and Arts in Uglat Asugour, Qassim University. Gradient methods are preferred for training and pruning neural networks because regularization terms are primarily intended to remove redundant weights from the network. Many machine learning libraries use elastic net regularization (ENR), also called double regularization, which is a combination of L1 and L2 regularizations and tends to have a grouping effect in which correlated input features are given equal weights. This paper proposes a batch gradient method with smoothing elastic net regularization for pruning feedforward polynomial neural networks (FFPNNs), especially pi-sigma neural networks (PSNNs). Unfortunately, because elastic net regularization contains the 1-norm, which is non-differentiable (although it does not produce an NP-hard problem), the gradient method cannot be used directly. To overcome this obstacle, we replace the 1-norm with a continuous, differentiable function, which yields the smoothing elastic net regularization. Under this setting, a monotonicity theorem and two convergence theorems, covering weak convergence and strong convergence, are established. The experimental findings support the validity of the proposed theorems. According to the numerical results, the smoothing double regularization improves generalization performance and accelerates the learning process. https://sensorsportal.com/HTML/DIGEST/may_2023/Vol_260/P_3289.pdf convergence batch gradient method smoothing elastic net regularization pi-sigma neural networks |
spellingShingle | Khidir Shaib Mohamed Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization Sensors & Transducers convergence batch gradient method smoothing elastic net regularization pi-sigma neural networks |
title | Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization |
title_full | Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization |
title_fullStr | Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization |
title_full_unstemmed | Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization |
title_short | Pruning Feedforward Polynomial Neural with Smoothing Elastic Net Regularization |
title_sort | pruning feedforward polynomial neural with smoothing elastic net regularization |
topic | convergence batch gradient method smoothing elastic net regularization pi-sigma neural networks |
url | https://sensorsportal.com/HTML/DIGEST/may_2023/Vol_260/P_3289.pdf |
work_keys_str_mv | AT khidirshaibmohamed pruningfeedforwardpolynomialneuralwithsmoothingelasticnetregularization |
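The abstract above outlines the core idea behind smoothing elastic net regularization: the non-differentiable 1-norm in the elastic net penalty is replaced by a continuous, differentiable surrogate so that a batch gradient method can be applied and small weights can be identified for pruning. The following Python sketch is illustrative only; the smoothing function sqrt(w^2 + eps^2), the penalty coefficients, and the toy least-squares model standing in for a pi-sigma network are assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumption): a smoothed elastic net penalty used inside a
# batch gradient update. sqrt(w^2 + eps^2) is one common differentiable
# surrogate for |w|; the paper's actual smoothing function may differ.
import numpy as np

def smooth_abs(w, eps=1e-3):
    """Differentiable surrogate for |w| (illustrative choice)."""
    return np.sqrt(w * w + eps * eps)

def smooth_elastic_net(w, lam1=1e-3, lam2=1e-3, eps=1e-3):
    """Smoothed elastic net penalty: lam1 * sum(smooth|w|) + (lam2/2) * ||w||^2."""
    return lam1 * np.sum(smooth_abs(w, eps)) + 0.5 * lam2 * np.sum(w * w)

def smooth_elastic_net_grad(w, lam1=1e-3, lam2=1e-3, eps=1e-3):
    """Gradient of the smoothed penalty; defined everywhere, unlike d|w|/dw at 0."""
    return lam1 * w / smooth_abs(w, eps) + lam2 * w

def batch_gradient_step(w, X, y, lr=0.05, lam1=1e-3, lam2=1e-3):
    """One batch gradient step on a toy least-squares model y ~ X @ w,
    standing in for the pi-sigma network used in the paper."""
    residual = X @ w - y
    grad_loss = X.T @ residual / len(y)  # gradient of 0.5 * mean squared error
    return w - lr * (grad_loss + smooth_elastic_net_grad(w, lam1, lam2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_w = np.concatenate([rng.normal(size=3), np.zeros(7)])  # redundant weights to prune
y = X @ true_w + 0.01 * rng.normal(size=100)

w = rng.normal(size=10)
for _ in range(500):
    w = batch_gradient_step(w, X, y)
print(np.round(w, 3))                     # near-zero entries are pruning candidates
print(round(smooth_elastic_net(w), 4))    # value of the smoothed penalty at the solution
```

Because the surrogate's gradient exists at w = 0, the update rule stays well defined as weights shrink, which is what allows the monotonicity and convergence analysis described in the abstract to go through for a plain batch gradient method.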