Damped Newton Stochastic Gradient Descent Method for Neural Networks Training

First-order methods such as stochastic gradient descent (SGD) have recently become popular optimization methods for training deep neural networks (DNNs) because they generalize well; however, they require long training times. Second-order methods, which can lower the training time, are scarcely used on account o...
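
The title names a damped Newton step as the paper's second-order ingredient. As a rough illustration only, the Python sketch below shows the generic damped Newton update theta <- theta - lr * (H + mu*I)^(-1) * g on a toy quadratic loss; the function name, learning rate, and damping value are illustrative assumptions, not taken from the paper, and this is not a reproduction of the authors' algorithm.

    import numpy as np

    # Generic damped Newton update (hypothetical sketch, not the authors' method).
    # The damping term mu*I keeps the Hessian approximation positive definite
    # and bounds the step size.
    def damped_newton_step(theta, grad, hessian, lr=1.0, damping=1e-2):
        n = theta.size
        step = np.linalg.solve(hessian + damping * np.eye(n), grad)
        return theta - lr * step

    # Toy usage on a quadratic loss L(theta) = 0.5*theta^T A theta - b^T theta,
    # whose gradient is A @ theta - b and whose Hessian is A.
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -2.0])
    theta = np.zeros(2)
    for _ in range(10):
        theta = damped_newton_step(theta, A @ theta - b, A)
    print(theta)  # approaches the minimizer A^{-1} b

On a quadratic loss a single undamped Newton step would reach the minimizer exactly; the damping trades some of that speed for stability, which is what makes such updates usable on the non-convex losses of DNN training.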

Bibliographic Details
Main Authors: Jingcheng Zhou, Wei Wei, Ruizhi Zhang, Zhiming Zheng
Format: Article
Language: English
Published: MDPI AG, 2021-06-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/9/13/1533