Bayesian Learning via Stochastic Gradient Langevin Dynamics
In this paper we propose a new framework for learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm, we show that the iterates will converge to samples from the true posterior...
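The update the abstract describes can be sketched in a few lines: take a standard mini-batch stochastic gradient step on the log-posterior, then inject Gaussian noise whose variance matches the step size. The toy model below (a Gaussian likelihood with a Gaussian prior on the mean, a fixed step size, and the variable names `eps`, `grad_log_prior`, `grad_log_lik`) is an illustrative assumption, not the paper's experimental setup; the paper itself uses a decaying step-size schedule.

```python
import numpy as np

# Minimal sketch of stochastic gradient Langevin dynamics (SGLD).
# Toy problem (assumed for illustration): infer the mean theta of
# x ~ N(theta, 1) under the prior theta ~ N(0, 10).

rng = np.random.default_rng(0)
N = 1000
data = rng.normal(2.0, 1.0, size=N)  # synthetic data, true mean 2.0

def grad_log_prior(theta):
    # d/dtheta of log N(theta; 0, 10)
    return -theta / 10.0

def grad_log_lik(theta, batch):
    # d/dtheta of sum_i log N(x_i; theta, 1) over the mini-batch
    return np.sum(batch - theta)

theta = 0.0
eps = 1e-3        # fixed small step size; the paper anneals this instead
batch_size = 50
samples = []
for t in range(5000):
    batch = rng.choice(data, size=batch_size, replace=False)
    # Mini-batch gradient of the log-posterior, rescaled by N / batch_size
    grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
    # Injected noise with variance equal to the step size: this is what
    # turns the SGD iterates into (approximate) posterior samples
    noise = rng.normal(0.0, np.sqrt(eps))
    theta = theta + 0.5 * eps * grad + noise
    if t > 1000:  # discard burn-in
        samples.append(theta)

print(np.mean(samples))  # close to the posterior mean, near 2.0
```

Without the `noise` term this is plain mini-batch SGD and `theta` collapses toward the MAP point; with it, the iterates wander with a spread reflecting posterior uncertainty.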
Main Authors:
Format: Journal article
Language: English
Published: 2011