Applications of probabilistic programming


Bibliographic Details
Main Author: Perov, Y
Other Authors: Wood, F
Format: Thesis
Language: English
Published: 2015
Subjects:
author Perov, Y
author2 Wood, F
collection OXFORD
description <p>This thesis describes work on two applications of probabilistic programming: the learning of probabilistic program code given specifications, in particular program code of one-dimensional samplers; and the facilitation of sequential Monte Carlo inference with the help of data-driven proposals. The latter is presented with experimental results on a linear Gaussian model and a non-parametric dependent Dirichlet process mixture of objects model for object recognition and tracking.</p> <p>We begin this work by providing a brief introduction to probabilistic programming.</p> <p>In Chapter 2 we present an approach to the automatic discovery of samplers in the form of probabilistic programs. Specifically, we learn the procedure code of samplers for one-dimensional distributions. We formulate a Bayesian approach to this problem by specifying a grammar-based prior over probabilistic program code. We use an approximate Bayesian computation method to learn the programs, whose executions generate samples that statistically match observed data or analytical characteristics of distributions of interest. In our experiments we leverage different probabilistic programming systems, including Anglican and Probabilistic C, to perform Markov chain Monte Carlo sampling over the space of programs. Experimental results demonstrate that, using the proposed methodology, we can learn approximate and even some exact samplers. Finally, we show that our results are competitive with respect to genetic programming methods.</p> <p>In Chapter 3, we describe a way to facilitate sequential Monte Carlo inference in probabilistic programming using data-driven proposals. In particular, we develop a distance-based proposal for the non-parametric dependent Dirichlet process mixture of objects model. We implement this approach in the probabilistic programming system Anglican, and show that for that model data-driven proposals provide significant performance improvements. We also explore the possibility of using neural networks to improve data-driven proposals.</p>
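The Chapter 2 approach — a grammar-based prior over sampler programs, scored by how well their output statistically matches observed data, explored with Markov chain Monte Carlo — can be illustrated with a minimal, hypothetical Python sketch. Everything here is an illustrative assumption, not the thesis's actual method: the thesis works in systems such as Anglican and Probabilistic C with a much richer grammar, whereas this toy uses a tiny arithmetic grammar, a Kolmogorov–Smirnov distance as the approximate Bayesian computation discrepancy, and an independence Metropolis–Hastings sampler that proposes fresh programs from the prior.

```python
import math
import random

# Tiny expression grammar for one-dimensional samplers (hypothetical).
# A program is either a float constant, ('u',) meaning "draw Uniform(0,1)",
# or ('+', a, b) / ('*', a, b) combining two subprograms.

def sample_program(depth=0, max_depth=3):
    """Draw a program from a simple grammar-based prior."""
    if depth >= max_depth or random.random() < 0.4:
        return ('u',) if random.random() < 0.5 else round(random.uniform(-2, 2), 2)
    op = random.choice(['+', '*'])
    return (op, sample_program(depth + 1, max_depth),
                sample_program(depth + 1, max_depth))

def run(prog):
    """Execute a program once, returning one sample from it."""
    if isinstance(prog, float):
        return prog
    if prog[0] == 'u':
        return random.random()
    a, b = run(prog[1]), run(prog[2])
    return a + b if prog[0] == '+' else a * b

def ks_distance(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: max empirical-CDF gap."""
    d = 0.0
    for p in xs + ys:
        fx = sum(x <= p for x in xs) / len(xs)
        fy = sum(y <= p for y in ys) / len(ys)
        d = max(d, abs(fx - fy))
    return d

def abc_mcmc(observed, iters=200, n=100, eps_scale=10.0):
    """Metropolis-Hastings over program space with an ABC pseudo-likelihood
    exp(-eps_scale * KS(samples(prog), observed)).  With an independence
    proposal from the prior, the acceptance ratio reduces to the
    pseudo-likelihood ratio."""
    current = sample_program()
    cur_d = ks_distance([run(current) for _ in range(n)], observed)
    for _ in range(iters):
        proposal = sample_program()
        prop_d = ks_distance([run(proposal) for _ in range(n)], observed)
        if random.random() < math.exp(eps_scale * (cur_d - prop_d)):
            current, cur_d = proposal, prop_d
    return current, cur_d
```

A run such as `abc_mcmc([random.random() for _ in range(100)])` should drift toward programs whose output distribution is close (in KS distance) to the observed samples — here, something behaving like Uniform(0,1).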
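The Chapter 3 idea — replacing the prior (bootstrap) proposal in sequential Monte Carlo with a proposal that conditions on the data — can be sketched on a linear Gaussian state-space model of the kind mentioned above. This is a hedged illustration, not the thesis's implementation (which lives inside Anglican and targets a distance-based proposal for a mixture-of-objects model): the model x_t ~ N(x_{t-1}, Q), y_t ~ N(x_t, R), the noise settings, the locally optimal Gaussian proposal, and the effective-sample-size diagnostic are all assumptions chosen for the sketch.

```python
import math
import random

Q, R = 1.0, 0.5  # transition and observation noise variances (assumed)

def normal_logpdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def smc(ys, n_particles=500, data_driven=True):
    """Particle filter for x_t ~ N(x_{t-1}, Q), y_t ~ N(x_t, R).
    Returns the average effective sample size (ESS) before resampling,
    a standard measure of proposal quality."""
    particles = [0.0] * n_particles
    ess_total = 0.0
    for y in ys:
        logw, new_particles = [], []
        for xp in particles:
            if data_driven:
                # Data-driven (locally optimal) proposal p(x_t | x_{t-1}, y_t):
                # Gaussian with precision-weighted mean of prediction and data.
                var = Q * R / (Q + R)
                mu = var * (xp / Q + y / R)
                x = random.gauss(mu, math.sqrt(var))
                # Incremental weight: p(y_t | x_{t-1}) = N(y; x_{t-1}, Q + R).
                lw = normal_logpdf(y, xp, Q + R)
            else:
                # Bootstrap proposal: sample from the transition prior,
                # weight by the observation likelihood.
                x = random.gauss(xp, math.sqrt(Q))
                lw = normal_logpdf(y, x, R)
            new_particles.append(x)
            logw.append(lw)
        m = max(logw)
        w = [math.exp(l - m) for l in logw]
        s = sum(w)
        w = [wi / s for wi in w]
        ess_total += 1.0 / sum(wi * wi for wi in w)
        # Multinomial resampling.
        particles = random.choices(new_particles, weights=w, k=n_particles)
    return ess_total / len(ys)
```

On simulated data, the data-driven proposal typically yields a noticeably higher average ESS than the bootstrap proposal, which is the kind of performance improvement the abstract reports (there, for a far richer model than this toy).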
first_indexed 2024-03-07T00:28:29Z
format Thesis
id oxford-uuid:7ef28804-051c-4ff5-a4c2-bd591574741b
institution University of Oxford
language English
last_indexed 2024-03-07T00:28:29Z
publishDate 2015
record_format dspace
title Applications of probabilistic programming
topic Computer science
Machine learning
Artificial intelligence