Bayesian Learning via Stochastic Gradient Langevin Dynamics

In this paper we propose a new framework for learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm, we show that the iterates will converge to samples from the true posterior...
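The idea summarized above can be sketched in a few lines: take a stochastic gradient step on the log posterior, rescale the mini-batch likelihood gradient by N/n, and inject Gaussian noise with variance equal to the step size. The following is an illustrative sketch only, not the authors' code; the toy model (posterior over the mean of a 1-D unit-variance Gaussian under a N(0, 1) prior), step size, and batch size are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
data = rng.normal(2.0, 1.0, size=N)   # observations with true mean 2.0

def sgld(data, n_steps=5000, batch_size=10, eps=1e-3):
    """Run SGLD with a fixed step size eps; returns the chain of iterates."""
    N = len(data)
    theta = 0.0
    chain = np.empty(n_steps)
    for t in range(n_steps):
        batch = data[rng.integers(0, N, size=batch_size)]
        grad_prior = -theta                                  # d/dtheta log N(theta; 0, 1)
        grad_lik = (N / batch_size) * np.sum(batch - theta)  # rescaled mini-batch gradient
        noise = rng.normal(0.0, np.sqrt(eps))                # injected noise, variance = eps
        theta = theta + 0.5 * eps * (grad_prior + grad_lik) + noise
        chain[t] = theta
    return chain

chain = sgld(data)
print(chain[1000:].mean())  # close to the posterior mean N*mean(data)/(N+1), i.e. about 2.0
```

Note one simplification: the paper anneals the step size toward zero over time, whereas this sketch keeps a fixed small step, so the chain only approximates the posterior (mini-batch gradient noise slightly inflates its spread).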


Bibliographic Details
Main Authors: Welling, M.; Teh, Y.
Format: Journal article
Language: English
Published: 2011