Damped Newton Stochastic Gradient Descent Method for Neural Networks Training

First-order methods such as stochastic gradient descent (SGD) have recently become popular for training deep neural networks (DNNs) with good generalization; however, they require long training times. Second-order methods, which can reduce the training time, are scarcely used on account o...
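For context, a minimal sketch contrasting the two families of updates named in the abstract: a plain first-order SGD step versus a generic damped Newton step w ← w − (H + μI)⁻¹g. The damping constant, toy quadratic loss, and function names here are illustrative assumptions, not the algorithm proposed in the paper.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """First-order SGD update: w <- w - lr * g."""
    return w - lr * grad

def damped_newton_step(w, grad, hessian, damping=1e-2):
    """Generic damped Newton update: w <- w - (H + mu*I)^{-1} g.

    The damping term mu*I keeps an ill-conditioned or indefinite
    Hessian invertible and bounds the step size.
    """
    H_damped = hessian + damping * np.eye(hessian.shape[0])
    return w - np.linalg.solve(H_damped, grad)

# Toy quadratic loss f(w) = 0.5 w^T A w - b^T w,
# so grad = A w - b and the Hessian is A.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
w = np.zeros(2)
for _ in range(5):
    g = A @ w - b
    w = damped_newton_step(w, g, A)
print(w)  # converges toward the minimizer A^{-1} b
```

The sketch also illustrates the cost argument from the abstract: the Newton step requires forming and solving against the Hessian, which is prohibitive when w has millions of parameters, whereas the SGD step touches only the gradient.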

Full description

Bibliographic details
Main authors: Jingcheng Zhou, Wei Wei, Ruizhi Zhang, Zhiming Zheng
Format: Article
Language: English
Published in: MDPI AG 2021-06-01
Series: Mathematics
Online access: https://www.mdpi.com/2227-7390/9/13/1533