Carathéodory sampling for stochastic gradient descent
Many problems require optimizing empirical risk functions over large datasets. Gradient descent methods that compute the full gradient at every descent step do not scale to such datasets. Various flavours of Stochastic Gradient Descent (SGD) replace the expensive summation that computes the full...
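The abstract's contrast between the full-gradient sum and its stochastic replacement can be sketched as follows. This is a generic minibatch-SGD illustration on a toy 1-D least-squares problem, not the paper's Carathéodory sampling scheme; all names and parameters here are illustrative assumptions.

```python
import random

random.seed(0)

# Toy dataset: y ≈ 3.0 * x with small noise (true slope is 3.0).
n = 1000
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [3.0 * x + random.gauss(0.0, 0.1) for x in xs]

def full_gradient(w):
    # O(n) per step: the expensive summation over the whole dataset
    # that SGD avoids.
    return sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def minibatch_gradient(w, batch_size=32):
    # O(batch_size) per step: an unbiased estimate of full_gradient(w)
    # from a uniformly sampled minibatch.
    idx = random.sample(range(n), batch_size)
    return sum(2.0 * (w * xs[i] - ys[i]) * xs[i] for i in idx) / batch_size

w = 0.0          # initial parameter
lr = 0.5         # step size (illustrative choice)
for _ in range(200):
    w -= lr * minibatch_gradient(w)

print(w)  # should land near the true slope 3.0
```

Each minibatch step touches only 32 of the 1000 points, which is the scalability argument the abstract opens with; the trade-off is gradient noise, which subsampling schemes such as the one this record describes aim to control.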
| Main Authors: | Cosentino, F, Oberhauser, H, Abate, A |
|---|---|
| Format: | Internet publication |
| Language: | English |
| Published: | 2020 |
Similar Items

- Carathéodory sampling for stochastic gradient descent
  by: Cosentino, F, et al.
  Published: (2020)
- A Scalable Bayesian Sampling Method Based on Stochastic Gradient Descent Isotropization
  by: Giulio Franzese, et al.
  Published: (2021-10-01)
- Pipelined Stochastic Gradient Descent with Taylor Expansion
  by: Bongwon Jang, et al.
  Published: (2023-10-01)
- Stochastic gradient descent for wind farm optimization
  by: J. Quick, et al.
  Published: (2023-08-01)
- Stochastic gradient descent for optimization for nuclear systems
  by: Austin Williams, et al.
  Published: (2023-05-01)