Bayesian Inversion by ω-complete cone duality

The process of inverting Markov kernels relates to the important subject of Bayesian modelling and learning. In fact, Bayesian update is exactly kernel inversion. In this paper, we investigate how and when Markov kernels (aka stochastic relations, or probabilistic mappings, or simply kernels) can be inverted. We address the question both directly on the category of measurable spaces, and indirectly by interpreting kernels as Markov operators: For the direct option, we introduce a typed version of the category of Markov kernels and use the so-called 'disintegration of measures'. Here, one has to specialise to measurable spaces borne from a simple class of topological spaces, e.g. Polish spaces (other choices are possible). Our method and result greatly simplify a recent development in Ref. [4]. For the operator option, we use a cone version of the category of Markov operators (kernels seen as predicate transformers). That is to say, our linear operators are not just continuous, but are required to satisfy the stronger condition of being ω-chain-continuous. Prior work shows that one obtains an adjunction in the form of a pair of contravariant and inverse functors between the categories of L1- and L∞-cones [3]. Inversion, seen through the operator prism, is just adjunction. No topological assumption is needed. We show that both categories (Markov kernels and ω-chain-continuous Markov operators) are related by a family of contravariant functors Tp for 1 ≤ p ≤ ∞. The Tp's are Kleisli extensions of (duals of) conditional expectation functors introduced in Ref. [3]. With this bridge in place, we can prove that both notions of inversion agree when both are defined: if f is a kernel and f† is its direct inverse, then T∞(f)† = T∞(f†).
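
For readers who prefer a concrete picture, here is a minimal finite-state sketch of the two views of inversion described above. It is not taken from the paper: the variable names (mu, f, invert_kernel) and the 2x3 example data are illustrative assumptions, and the finite, discrete setting only hints at the general measure-theoretic and ω-complete-cone machinery.

# Minimal finite-state sketch (illustrative only, not the paper's construction).
# A Markov kernel f : X -> Dist(Y) on finite spaces is a row-stochastic matrix,
# its Bayesian inverse with respect to a prior is given by Bayes' rule, and
# "inversion is adjunction" becomes an identity between weighted pairings.
import numpy as np

f = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])        # kernel f : X -> Dist(Y), |X| = 2, |Y| = 3
mu = np.array([0.4, 0.6])              # prior on X

def invert_kernel(f, mu):
    """Bayesian inverse f† : Y -> Dist(X) of f with respect to the prior mu.

    Bayes' rule in kernel form: f_dag[y, x] = mu[x] * f[x, y] / nu[y],
    where nu = mu . f is the pushforward (marginal) of mu on Y.
    Only defined on the support of nu (no zero entries in this example).
    """
    nu = mu @ f                        # marginal on Y
    joint = mu[:, None] * f            # joint[x, y] = mu[x] * f[x, y]
    return (joint / nu).T, nu          # transpose so rows are indexed by y

f_dag, nu = invert_kernel(f, mu)
assert np.allclose(f_dag.sum(axis=1), 1.0)   # each f_dag[y, :] is a distribution

# Operator (predicate-transformer) view: f acts on observables g on Y by
# (T(f) g)(x) = sum_y f[x, y] * g[y], i.e. the matrix-vector product f @ g.
g = np.array([1.0, 0.0, 2.0])          # an observable on Y
h = np.array([3.0, 1.0])               # an observable on X

# Inversion as adjunction, in miniature: pairing T(f) g against h under mu
# equals pairing T(f_dag) h against g under nu.
lhs = mu @ (h * (f @ g))
rhs = nu @ (g * (f_dag @ h))
assert np.isclose(lhs, rhs)
print("Bayesian inverse kernel f† (rows indexed by y):")
print(f_dag)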

Bibliographic details
Main authors: Dahlqvist, F; Danos, V; Garnier, I; Kammar, O
Format: Conference item
Published: Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2016
Collection: OXFORD
Record ID: oxford-uuid:e097d26f-07f5-4bf0-9e27-c348dfe51fb9
Institution: University of Oxford