Bayesian Learning via Stochastic Gradient Langevin Dynamics
In this paper we propose a new framework for learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm, we show that the iterates will converge to samples from the true posterior...
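The update the abstract describes combines a mini-batch stochastic gradient step with injected Gaussian noise whose variance matches the step size. Below is a minimal NumPy sketch of one such update, under a generic gradient interface; the names (`sgld_step`, `grad_log_prior`, `grad_log_lik`, `step_size`) and the toy Gaussian example are illustrative assumptions, not code from the paper.

```python
import numpy as np

def sgld_step(theta, minibatch, N, step_size, grad_log_prior, grad_log_lik, rng):
    """One stochastic gradient Langevin dynamics (SGLD)-style update (sketch).

    theta: current parameter vector
    minibatch: data points drawn from the full dataset
    N: size of the full dataset (rescales the mini-batch gradient)
    step_size: epsilon_t, also the variance of the injected Gaussian noise
    grad_log_prior(theta): gradient of log p(theta)
    grad_log_lik(theta, x): gradient of log p(x | theta) for one data point
    """
    n = len(minibatch)
    # Unbiased mini-batch estimate of the full-data log-posterior gradient.
    grad = grad_log_prior(theta) + (N / n) * sum(
        grad_log_lik(theta, x) for x in minibatch
    )
    # Langevin noise: zero-mean Gaussian with variance equal to the step size.
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    # Half-step along the stochastic gradient plus the injected noise.
    return theta + 0.5 * step_size * grad + noise


# Toy usage: sampling the posterior mean of a unit-variance Gaussian.
rng = np.random.default_rng(0)
data = rng.normal(1.5, 1.0, size=1000)
theta = np.zeros(1)
for t in range(5000):
    batch = rng.choice(data, size=10)
    eps = 1e-2 / (1 + t) ** 0.55              # slowly decreasing step sizes
    theta = sgld_step(
        theta, batch, len(data), eps,
        grad_log_prior=lambda th: -th,         # N(0, 1) prior
        grad_log_lik=lambda th, x: (x - th),   # N(th, 1) likelihood
        rng=rng,
    )
```

The decaying `eps` schedule reflects the annealing of step sizes toward zero that the abstract alludes to; the particular decay rate used here is only an assumed example.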
Main Authors: |
---|---
Material Type: | Journal article
Language: | English
Publication Info: | 2011