AG-SGD: Angle-Based Stochastic Gradient Descent
In the field of neural networks, stochastic gradient descent (SGD) is often employed as an effective method of accelerating convergence. Generating a new gradient from past gradients is a common technique adopted by many existing optimization algorithms. Since the past gradient is not c...
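The abstract above is truncated on this record page, so the exact AG-SGD update rule cannot be recovered from it. As a rough illustration of the general idea it describes, combining a past gradient with the current one based on the angle between them, here is a minimal, hypothetical NumPy sketch. The function name `angle_sgd_step` and the cosine-based blending weight are assumptions for illustration, not the authors' published method.

```python
import numpy as np

def angle_sgd_step(w, grad, past_grad, lr=0.01):
    """One illustrative angle-based SGD step (hypothetical sketch).

    Quantifies how far the current gradient deviates from the past one
    via the angle between them, then blends the two accordingly: a small
    angle keeps more of the past direction, a large angle relies more on
    the fresh gradient.
    """
    denom = np.linalg.norm(grad) * np.linalg.norm(past_grad)
    if denom == 0.0:
        # Degenerate case: at least one gradient is zero; no angle defined.
        cos_theta = 0.0
    else:
        # Cosine of the angle between the past and current gradients.
        cos_theta = float(np.dot(grad, past_grad) / denom)
    # Map cos(theta) from [-1, 1] to a blending weight in [0, 1].
    alpha = 0.5 * (1.0 + cos_theta)
    new_grad = alpha * past_grad + (1.0 - alpha) * grad
    return w - lr * new_grad, new_grad
```

For example, `w, g = angle_sgd_step(w, grad, g)` inside a training loop carries the blended gradient forward as the next step's past gradient; the actual AG-SGD correction is detailed in the linked article.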
| Main Authors: | Chongya Song, Alexander Pons, Kang Yen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9343305/ |
Similar Items
- aSGD: Stochastic Gradient Descent with Adaptive Batch Size for Every Parameter
  by: Haoze Shi, et al.
  Published: (2022-03-01)
- Damped Newton Stochastic Gradient Descent Method for Neural Networks Training
  by: Jingcheng Zhou, et al.
  Published: (2021-06-01)
- Recent Advances in Stochastic Gradient Descent in Deep Learning
  by: Yingjie Tian, et al.
  Published: (2023-01-01)
- A Novel Adaptive PID Controller Design for a PEM Fuel Cell Using Stochastic Gradient Descent with Momentum Enhanced by Whale Optimizer
  by: Mohammed Yousri Silaa, et al.
  Published: (2022-08-01)
- Adaptive Gradient Estimation Stochastic Parallel Gradient Descent Algorithm for Laser Beam Cleanup
  by: Shiqing Ma, et al.
  Published: (2021-05-01)