Constrained Adjusted Maximum a Posteriori Estimation of Bayesian Network Parameters
Maximum a posteriori (MAP) estimation with a Dirichlet prior has been shown to be effective in improving the parameter learning of Bayesian networks when the available data are insufficient. Given no extra domain knowledge, a uniform prior is often used for regularization. However, when the underlying parameter distribution is non-uniform or skewed, a uniform prior does not work well and a more informative prior is required. In reality, unless domain experts are extremely unfamiliar with the network, they can provide some reliable knowledge about it. With that knowledge, informative priors can be refined automatically and a reasonable equivalent sample size (ESS) selected. In this paper, starting from parameter constraints transformed from domain knowledge, we propose a Constrained adjusted Maximum a Posteriori (CaMAP) estimation method, which features two novel techniques. First, to draw an informative prior distribution (the prior shape), we present a novel sampling method that constructs the prior distribution from the constraints. Then, to find the optimal ESS (the prior strength), we derive constraints on the ESS from the parameter constraints and select the optimal ESS by cross-validation. Numerical experiments show that the proposed method is superior to other learning algorithms.
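The abstract's core quantity can be illustrated concretely. In Dirichlet-smoothed parameter learning, the prior is commonly factored into a shape (a normalized distribution p_k) and a strength (the ESS α), giving hyperparameters α_k = α·p_k. A minimal sketch, assuming the widely used posterior-expectation estimate (N_k + α·p_k) / (N + α); this is not the paper's CaMAP code, and the function name is illustrative:

```python
import numpy as np

def dirichlet_map_estimate(counts, prior_shape, ess):
    """Smoothed estimate of one conditional-probability-table row.

    counts      : observed counts N_k for each state of the child variable
    prior_shape : normalized prior distribution p_k (the "prior shape")
    ess         : equivalent sample size (the "prior strength")

    Returns (N_k + ess * p_k) / (N + ess), i.e. the estimate under
    Dirichlet hyperparameters alpha_k = ess * p_k.
    """
    counts = np.asarray(counts, dtype=float)
    alpha = ess * np.asarray(prior_shape, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha.sum())

# Sparse data: only 3 observations of a 3-state variable.
theta_uniform = dirichlet_map_estimate([2, 1, 0], [1/3, 1/3, 1/3], ess=3.0)
theta_skewed  = dirichlet_map_estimate([2, 1, 0], [0.7, 0.2, 0.1], ess=3.0)
```

With a skewed prior shape, the estimate for the unobserved third state shrinks toward the prior's small mass rather than the uniform 1/6, which is the behavior the paper targets when the true distribution is non-uniform. Selecting `ess` by cross-validation, as the abstract describes, would amount to scoring held-out log-likelihood over a grid of candidate values.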
Main Authors: | Ruohai Di, Peng Wang, Chuchao He, Zhigao Guo |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-09-01 |
Series: | Entropy |
Subjects: | graphical models; domain knowledge; prior distribution; equivalent sample size; parameter constraints |
Online Access: | https://www.mdpi.com/1099-4300/23/10/1283 |
_version_ | 1797514636838305792 |
---|---|
author | Ruohai Di; Peng Wang; Chuchao He; Zhigao Guo |
author_sort | Ruohai Di |
collection | DOAJ |
description | Maximum a posteriori estimation (MAP) with Dirichlet prior has been shown to be effective in improving the parameter learning of Bayesian networks when the available data are insufficient. Given no extra domain knowledge, uniform prior is often considered for regularization. However, when the underlying parameter distribution is non-uniform or skewed, uniform prior does not work well, and a more informative prior is required. In reality, unless the domain experts are extremely unfamiliar with the network, they would be able to provide some reliable knowledge on the studied network. With that knowledge, we can automatically refine informative priors and select reasonable equivalent sample size (ESS). In this paper, considering the parameter constraints that are transformed from the domain knowledge, we propose a Constrained adjusted Maximum a Posteriori (CaMAP) estimation method, which is featured by two novel techniques. First, to draw an informative prior distribution (or prior shape), we present a novel sampling method that can construct the prior distribution from the constraints. Then, to find the optimal ESS (or prior strength), we derive constraints on the ESS from the parameter constraints and select the optimal ESS by cross-validation. Numerical experiments show that the proposed method is superior to other learning algorithms. |
first_indexed | 2024-03-10T06:34:24Z |
format | Article |
id | doaj.art-25f68ba578464c0e885be2a19bdefe3c |
institution | Directory Open Access Journal |
issn | 1099-4300 |
language | English |
last_indexed | 2024-03-10T06:34:24Z |
publishDate | 2021-09-01 |
publisher | MDPI AG |
record_format | Article |
series | Entropy |
spelling | doaj.art-25f68ba578464c0e885be2a19bdefe3c (2023-11-22T18:10:37Z); eng; MDPI AG; Entropy; 1099-4300; 2021-09-01; vol. 23, no. 10, art. 1283; doi:10.3390/e23101283; Constrained Adjusted Maximum a Posteriori Estimation of Bayesian Network Parameters; Ruohai Di, Peng Wang, Chuchao He: School of Electronics and Information Engineering, Xi’an Technological University, Xi’an 710021, China; Zhigao Guo: School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK; https://www.mdpi.com/1099-4300/23/10/1283; graphical models; domain knowledge; prior distribution; equivalent sample size; parameter constraints |
title | Constrained Adjusted Maximum a Posteriori Estimation of Bayesian Network Parameters |
topic | graphical models domain knowledge prior distribution equivalent sample size parameter constraints |
url | https://www.mdpi.com/1099-4300/23/10/1283 |