Training Neural Networks by Time-Fractional Gradient Descent
Motivated by the weighted averaging method for training neural networks, we study the time-fractional gradient descent (TFGD) method based on the time-fractional gradient flow and explore the influence of memory dependence on neural network training. The TFGD algorithm in this paper is studied via t...
Main Authors: Jingyi Xie, Sirui Li
Format: Article
Language: English
Published: MDPI AG, 2022-09-01
Series: Axioms
Online Access: https://www.mdpi.com/2075-1680/11/10/507
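The abstract describes updates with memory dependence, where past gradients influence the current step through a time-fractional derivative. As a hedged illustration only (not necessarily the paper's exact scheme), the sketch below uses an L1-type discretization of a Caputo derivative of order alpha in (0, 1): each step takes a power-law weighted average over the full gradient history, so older gradients contribute with decaying weight, and alpha near 1 recovers ordinary gradient descent. The function name `tfgd_minimize` and all parameter choices are hypothetical.

```python
import numpy as np

def tfgd_minimize(grad, theta0, alpha=0.6, lr=0.1, steps=200):
    """Illustrative time-fractional gradient descent (hypothetical
    discretization, not the paper's published algorithm).

    Each update averages all past gradients with L1-scheme weights
    b_j = (j+1)^(1-alpha) - j^(1-alpha), giving power-law memory.
    """
    theta = np.asarray(theta0, dtype=float)
    grads = []  # gradient history: this is the memory dependence
    for k in range(steps):
        grads.append(grad(theta))
        # power-law weights over the history, j = 0 (newest) .. k (oldest)
        j = np.arange(k + 1)
        b = (j + 1.0) ** (1 - alpha) - j ** (1 - alpha)
        b /= b.sum()  # normalize so the weights sum to one
        # weight b[0] multiplies the newest gradient, b[k] the oldest
        g = sum(bj * gj for bj, gj in zip(b, reversed(grads)))
        theta = theta - lr * g
    return theta

# usage: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
theta = tfgd_minimize(lambda x: 2 * (x - 3.0), np.array([0.0]))
```

With alpha close to 1 the weights collapse onto the newest gradient (standard gradient descent); smaller alpha spreads weight over the history, which is the memory effect the abstract refers to.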
Similar Items
- Function approximation method based on weights gradient descent in reinforcement learning
  by: Xiaoyan QIN, Yuhan LIU, Yunlong XU, Bin LI
  Published: (2023-08-01)
- Forecasting Economic Growth of the Group of Seven via Fractional-Order Gradient Descent Approach
  by: Xiaoling Wang, et al.
  Published: (2021-10-01)
- Damped Newton Stochastic Gradient Descent Method for Neural Networks Training
  by: Jingcheng Zhou, et al.
  Published: (2021-06-01)
- The Improved Stochastic Fractional Order Gradient Descent Algorithm
  by: Yang Yang, et al.
  Published: (2023-08-01)