Bayesian Learning via Stochastic Gradient Langevin Dynamics

In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior...
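To make the described update concrete, below is a minimal sketch of the stochastic gradient Langevin dynamics iterate on a toy model: at each step a mini-batch stochastic gradient of the log posterior is taken and Gaussian noise with variance equal to the step size is injected. The toy 1-D Gaussian mean model, the particular decaying step-size schedule, and all variable names are assumptions for illustration, not taken from the source record.

```python
# Minimal SGLD sketch (assumed toy model: 1-D Gaussian mean estimation with a
# Gaussian prior; the step-size schedule and constants are illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N observations from a Gaussian with unknown mean, known variance.
N, sigma2 = 1_000, 1.0
data = rng.normal(loc=2.0, scale=np.sqrt(sigma2), size=N)

# Prior on the mean: theta ~ N(0, tau2).
tau2 = 10.0

def grad_log_prior(theta):
    return -theta / tau2

def grad_log_lik(theta, x):
    # Gradient of sum_i log N(x_i | theta, sigma2) over a mini-batch x.
    return np.sum(x - theta) / sigma2

theta = 0.0      # initial iterate
n = 100          # mini-batch size
samples = []
for t in range(1, 5001):
    eps_t = 1e-3 * (t + 10) ** -0.55              # decaying step size (assumed schedule)
    batch = data[rng.choice(N, size=n, replace=False)]
    # Mini-batch estimate of the full-data gradient, rescaled by N / n.
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)
    # Injected Gaussian noise with variance equal to the step size.
    noise = rng.normal(scale=np.sqrt(eps_t))
    theta = theta + 0.5 * eps_t * grad + noise
    samples.append(theta)

# After burn-in, the iterates behave like (approximate) posterior samples.
print("posterior mean estimate:", np.mean(samples[1000:]))
```

Without the injected noise term this loop reduces to plain mini-batch stochastic gradient ascent on the log posterior; with it, and with the step size annealed toward zero, the iterates can instead be used as approximate posterior samples, which is the transition the abstract refers to.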

Full Description

Detailed Bibliography
Main Authors: Welling, M., Teh, Y.
Material Type: Journal article
Language: English
Publication Information: 2011