Adaptive Stochastic Conjugate Gradient Optimization for Backpropagation Neural Networks
Backpropagation neural networks are widely used to solve complex problems across many disciplines, but optimizing their parameters remains a significant challenge. Traditional gradient-based optimization methods, such as stochastic gradient descent (SGD), often exhibit slow convergence and hype...
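The abstract is truncated before the method is described, so the paper's exact adaptive rule is not available in this record. As a rough, hypothetical illustration of the family of methods named in the title, the sketch below runs a stochastic conjugate gradient loop over minibatch gradients using the classic Fletcher–Reeves beta; the function names, the fixed step size standing in for a line search, and the toy least-squares problem are all assumptions, not the paper's algorithm.

```python
import numpy as np

# Hypothetical sketch: the record does not spell out the paper's adaptive
# update, so beta uses the classic Fletcher-Reeves formula
#   beta_k = (g_k . g_k) / (g_{k-1} . g_{k-1})
# computed on successive minibatch gradients.

def scg(grad_fn, w, batches, lr=0.02):
    prev_g, d = None, None
    for batch in batches:
        g = grad_fn(w, batch)
        if d is None:
            d = -g                                    # first step: steepest descent
        else:
            beta = (g @ g) / (prev_g @ prev_g + 1e-12)
            d = -g + beta * d                         # conjugate search direction
        w = w + lr * d                                # fixed step in place of a line search
        prev_g = g
    return w

# Toy usage: least-squares fit of y = X @ w_true from random minibatches.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def grad_fn(w, idx):
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)      # gradient of mean squared error

batches = [rng.choice(256, size=32, replace=False) for _ in range(300)]
print(scg(grad_fn, np.zeros(3), batches))             # approaches w_true
```

The conjugate direction reuses information accumulated from previous minibatch gradients instead of following the raw gradient alone, which is what lets conjugate-gradient-style methods outpace plain SGD on ill-conditioned problems.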
| Main Authors: | Ibrahim Abaker Targio Hashem, Fadele Ayotunde Alaba, Muhammad Haruna Jumare, Ashraf Osman Ibrahim, Anas Waleed Abulfaraj |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10445451/ |
Similar Items
- Estimation of simultaneous equation models by backpropagation method using stochastic gradient descent
  by: Belén Pérez-Sánchez, et al.
  Published: (2024-10-01)
- Counterexamples for Noise Models of Stochastic Gradients
  by: Vivak Patel
  Published: (2023-12-01)
- Research on three-step accelerated gradient algorithm in deep learning
  by: Yongqiang Lian, et al.
  Published: (2022-01-01)
- Damped Newton Stochastic Gradient Descent Method for Neural Networks Training
  by: Jingcheng Zhou, et al.
  Published: (2021-06-01)
- Recent Advances in Stochastic Gradient Descent in Deep Learning
  by: Yingjie Tian, et al.
  Published: (2023-01-01)