Damped Newton Stochastic Gradient Descent Method for Neural Networks Training
First-order methods such as stochastic gradient descent (SGD) have recently become popular optimization methods for training deep neural networks (DNNs) with good generalization; however, they require a long training time. Second-order methods, which can lower the training time, are scarcely used on account o...
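The record's abstract is truncated, but the title describes a damped Newton step applied to stochastic mini-batch training. A minimal sketch of such an update, assuming the textbook damped Newton form theta <- theta - lr * (H + mu*I)^(-1) * g on a toy least-squares problem (the toy problem, variable names, and hyperparameters below are illustrative assumptions, not taken from the paper), could look like:

```python
# Sketch of a damped Newton stochastic update (not the paper's exact algorithm):
# theta <- theta - lr * (H + mu*I)^(-1) * g, with g and H estimated per mini-batch.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize ||X @ theta - y||^2.
X = rng.normal(size=(256, 5))
theta_true = rng.normal(size=5)
y = X @ theta_true + 0.1 * rng.normal(size=256)

theta = np.zeros(5)
lr, mu, batch = 1.0, 1e-2, 32   # step size, damping, mini-batch size

for step in range(50):
    idx = rng.choice(len(X), size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    g = 2 * Xb.T @ (Xb @ theta - yb) / batch   # stochastic gradient
    H = 2 * Xb.T @ Xb / batch                  # mini-batch Hessian
    # Damped Newton direction: solve (H + mu*I) d = g.
    d = np.linalg.solve(H + mu * np.eye(len(theta)), g)
    theta -= lr * d

print("estimation error:", np.linalg.norm(theta - theta_true))
```

The damping term mu*I keeps the linear solve well-conditioned when the mini-batch Hessian is near-singular or indefinite, which is the usual motivation for damping a Newton step in stochastic settings.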
| Authors | Jingcheng Zhou, Wei Wei, Ruizhi Zhang, Zhiming Zheng |
|---|---|
| Format | Article |
| Language | English |
| Published | MDPI AG, 2021-06-01 |
| Series | Mathematics |
| Online Access | https://www.mdpi.com/2227-7390/9/13/1533 |
Similar Resources
- Adaptive Stochastic Gradient Descent Method for Convex and Non-Convex Optimization
  by Ruijuan Chen, et al. Published: (2022-11-01)
- The Improved Stochastic Fractional Order Gradient Descent Algorithm
  by Yang Yang, et al. Published: (2023-08-01)
- Recent Advances in Stochastic Gradient Descent in Deep Learning
  by Yingjie Tian, et al. Published: (2023-01-01)
- A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics
  by Rita Fioresi, et al. Published: (2020-01-01)
- Stochastic gradient descent with random label noises: doubly stochastic models and inference stabilizer
  by Haoyi Xiong, et al. Published: (2024-01-01)