Almost sure convergence of randomised‐difference descent algorithm for stochastic convex optimisation

Abstract The stochastic gradient descent algorithm is a classical and useful method for stochastic optimisation. While stochastic gradient descent has been theoretically investigated for decades and successfully applied in machine learning, such as the training of deep neural networks, it essentially relies...


Bibliographic Details
Main Authors: Xiaoxue Geng, Gao Huang, Wenxiao Zhao
Format: Article
Language: English
Published: Wiley 2021-11-01
Series: IET Control Theory & Applications
Online Access: https://doi.org/10.1049/cth2.12184