Maximum-Entropy Priors with Derived Parameters in a Specified Distribution
We propose a method for transforming probability distributions so that parameters of interest are forced into a specified distribution. We prove that this approach is the maximum-entropy choice and provide a motivating example applicable to neutrino-hierarchy inference.
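The abstract describes reshaping a base prior so that a derived parameter follows a specified distribution. A minimal sketch of that idea is given below, assuming a simple importance-reweighting form (target density over induced density of the derived parameter); the neutrino-mass setup, parameter ranges, and the exact reweighting form are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch: reweight samples from a base prior so that a derived
# parameter (here, the sum of neutrino masses) follows a specified target
# distribution. The reweighting form used here is an assumption for
# illustration, not necessarily the paper's exact construction.
import numpy as np
from scipy.stats import gaussian_kde, uniform

rng = np.random.default_rng(0)

# Base prior: independent uniform priors on three neutrino masses (eV) -- assumed ranges.
m = rng.uniform(0.0, 0.3, size=(100_000, 3))

# Derived parameter of interest: the sum of the masses.
sigma = m.sum(axis=1)

# Induced (pushforward) density of the derived parameter under the base prior,
# estimated here with a kernel density estimate.
induced = gaussian_kde(sigma)

# Specified target distribution for the derived parameter, e.g. uniform on [0, 0.9] eV.
target = uniform(loc=0.0, scale=0.9)

# Importance weights proportional to target/induced push the derived parameter
# toward the target distribution while perturbing the base prior as little as
# possible -- the maximum-entropy idea the abstract refers to.
weights = target.pdf(sigma) / induced(sigma)
weights /= weights.sum()

# Weighted resample: the derived parameter now approximately follows the target.
resampled = sigma[rng.choice(len(sigma), size=50_000, p=weights)]
```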
| Main Authors: | Will Handley, Marius Millea |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2019-03-01 |
| Series: | Entropy |
| Online Access: | http://www.mdpi.com/1099-4300/21/3/272 |
Similar Items
- The Prior Can Often Only Be Understood in the Context of the Likelihood
  by: Andrew Gelman, et al.
  Published: (2017-10-01)
- Bayesian Inference in Auditing with Partial Prior Information Using Maximum Entropy Priors
  by: María Martel-Escobar, et al.
  Published: (2018-12-01)
- Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†
  by: Steven H. Waldrip, et al.
  Published: (2017-02-01)
- Entropy, Information, and the Updating of Probabilities
  by: Ariel Caticha
  Published: (2021-07-01)
- Bayesian Inference under Small Sample Sizes Using General Noninformative Priors
  by: Jingjing He, et al.
  Published: (2021-11-01)