Measure Transformer Semantics for Bayesian Machine Learning

Bibliographic Details
Main Authors: Johannes Borgström, Andrew D Gordon, Michael Greenberg, James Margetson, Jurgen Van Gael
Format: Article
Language: English
Published: Logical Methods in Computer Science e.V., 2013-09-01
Series: Logical Methods in Computer Science
Subjects: computer science - logic in computer science; computer science - artificial intelligence; computer science - programming languages
Online Access: https://lmcs.episciences.org/815/pdf
author Johannes Borgström
Andrew D Gordon
Michael Greenberg
James Margetson
Jurgen Van Gael
collection DOAJ
description The Bayesian approach to machine learning amounts to computing posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define measure-transformer combinators inspired by theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid measures, and, in particular, for observations of zero-probability events. We compile our core language to a small imperative language that is processed by an existing inference engine for factor graphs, which are data structures that enable many efficient inference algorithms. This allows efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.
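To make the sample/observe style of probabilistic program described in the abstract concrete, here is a minimal sketch in Python. It is not the paper's core calculus, its measure-transformer semantics, or its factor-graph backend: priors are drawn by enumerating finite discrete distributions, observations are boolean conditions that reject a run, and the names `posterior` and `two_coins` are illustrative only.

```python
# Minimal sketch of a sample/observe probabilistic program, assuming finite
# discrete priors and exact inference by exhaustive enumeration.
from itertools import product
from collections import defaultdict

def posterior(model, supports):
    """Enumerate every assignment to the sampled variables, keep the prior
    weight of runs whose observations hold, and normalise the result."""
    weights = defaultdict(float)
    for choices in product(*(s.items() for s in supports)):
        values = [v for v, _ in choices]
        prior = 1.0
        for _, p in choices:
            prior *= p
        ok, result = model(values)
        if ok:                              # all observe statements held
            weights[result] += prior
    total = sum(weights.values())
    return {r: w / total for r, w in weights.items()}

# Two fair coins; observe that at least one is heads, ask about the first.
coin = {True: 0.5, False: 0.5}

def two_coins(values):
    a, b = values                           # a ~ coin, b ~ coin  (sampling)
    if not (a or b):                        # observe (a or b)
        return False, None                  # observation failed: weight 0
    return True, a                          # return value of the program

print(posterior(two_coins, [coin, coin]))   # {True: 0.666..., False: 0.333...}
```

Exhaustive enumeration like this only works for finite discrete priors; the point of the paper's semantics is to give the same sample/observe reading for discrete, continuous, and hybrid measures, including observations of zero-probability events, with compilation to factor graphs providing efficient approximate inference.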
format Article
id doaj.art-718952f3bd0343d9899c8284a7d05847
institution Directory Open Access Journal
issn 1860-5974
language English
publishDate 2013-09-01
publisher Logical Methods in Computer Science e.V.
series Logical Methods in Computer Science
spelling Logical Methods in Computer Science, Volume 9, Issue 3 (2013-09-01), article 815, DOI 10.2168/LMCS-9(3:11)2013. Michael Greenberg ORCID: https://orcid.org/0000-0003-0014-7670. Title, authors, abstract, subjects, and URL as above.
title Measure Transformer Semantics for Bayesian Machine Learning
topic computer science - logic in computer science
computer science - artificial intelligence
computer science - programming languages
url https://lmcs.episciences.org/815/pdf