Bayesian learning via stochastic gradient Langevin dynamics
In this paper we propose a new framework for learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm, we show that the iterates will converge to samples from the true posterior...
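For reference, the noisy update the abstract alludes to is the stochastic gradient Langevin dynamics (SGLD) step: $\Delta\theta_t = \frac{\varepsilon_t}{2}\big(\nabla \log p(\theta_t) + \frac{N}{n}\sum_{i=1}^{n} \nabla \log p(x_{ti} \mid \theta_t)\big) + \eta_t$, with injected noise $\eta_t \sim \mathcal{N}(0, \varepsilon_t)$ and an annealed step size $\varepsilon_t$. Below is a minimal Python sketch of this update on a toy Gaussian model; the interface (`sgld_step`, `grad_log_prior`, `grad_log_lik`) is an illustrative assumption, not code from the paper.

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik, batch, N, step_size, rng):
    """One SGLD update: a mini-batch stochastic gradient of the log posterior
    plus Gaussian noise whose variance equals the step size (illustrative sketch)."""
    n = len(batch)
    # Mini-batch estimate of the full-data log-likelihood gradient, rescaled by N / n.
    grad = grad_log_prior(theta) + (N / n) * sum(grad_log_lik(theta, x) for x in batch)
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad + noise

# Toy usage: posterior over the mean of a unit-variance Gaussian.
rng = np.random.default_rng(0)
data = rng.normal(1.5, 1.0, size=1000)
theta = np.zeros(1)
grad_log_prior = lambda t: -t            # standard normal prior
grad_log_lik = lambda t, x: x - t        # N(theta, 1) likelihood
for t in range(1, 2001):
    eps = 0.01 * t ** -0.55              # annealed step size, exponent in (0.5, 1]
    batch = rng.choice(data, size=32)
    theta = sgld_step(theta, grad_log_prior, grad_log_lik, batch,
                      N=len(data), step_size=eps, rng=rng)
# Late iterates are approximate posterior samples as the step size anneals.
```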
| Main Authors: | Welling, M; Teh, Y |
| --- | --- |
| Format: | Journal article |
| Language: | English |
| Published: | 2011 |
Similar Items
- Consistency and fluctuations for stochastic gradient Langevin dynamics
  by: Teh, YW, et al.
  Published: (2016)
- Exploration of the (non-)asymptotic bias and variance of stochastic gradient Langevin dynamics
  by: Vollmer, S, et al.
  Published: (2016)
- Distributed Bayesian learning with stochastic natural gradient expectation propagation and the posterior server
  by: Hasenclever, L, et al.
  Published: (2017)
- Langevin dynamics of spreading and wetting.
  by: Abraham, D, et al.
  Published: (1990)
- A unified stochastic gradient approach to designing Bayesian-optimal experiments
  by: Foster, A, et al.
  Published: (2020)