Convergence rates for the stochastic gradient descent method for non-convex objective functions

We prove convergence to minima, and estimates on the rate of convergence, for the stochastic gradient descent method in the case of objective functions that are not necessarily locally convex nor contracting. In particular, the analysis relies on a quantitative use of mini-batches to control the loss of it...
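The abstract's setting can be illustrated with a minimal sketch of mini-batch SGD on a simple non-convex objective. The objective below, f(w) = mean over the data of (w² − xᵢ)², and all parameter values are hypothetical choices for illustration, not taken from the paper; with targets near 4 it has two minima near w = ±2, so it is non-convex.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic targets near 4, so f has non-convex structure with minima near w = ±2.
data = rng.normal(loc=4.0, scale=0.1, size=1000)

def grad(w, batch):
    # Gradient of (w**2 - x)**2 averaged over the mini-batch: 4*w*(w**2 - x).
    return np.mean(4.0 * w * (w**2 - batch))

def sgd(w, steps=2000, batch_size=32, lr=0.01):
    # Plain mini-batch SGD; a larger batch_size reduces gradient noise,
    # echoing the abstract's quantitative use of mini-batches.
    for _ in range(steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        w -= lr * grad(w, batch)
    return w

w_star = sgd(w=0.5)
print(w_star)  # converges near |w| = 2
```

The batch size here plays the role highlighted in the abstract: averaging the gradient over a mini-batch shrinks the variance of each update, which is what makes a quantitative control of the iterates possible.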


Bibliographic details
Main Authors: Fehrman, B, Gess, B, Jentzen, A
Format: Journal article
Language: English
Published: Journal of Machine Learning Research, 2020