Summary: | Among the numerous proposals for weakening the attribute independence assumption of Naive Bayes, averaged one-dependence estimators (AODE) learns by extrapolation from marginal to full-multivariate probability distributions and has demonstrated appreciable improvement in classification performance. However, AODE assigns the same weight to all of its one-dependence estimators and combines their probability estimates linearly. This work presents an efficient and effective attribute value weighting approach that assigns discriminative, instance-specific weights to the super-parent one-dependence estimators by quantifying the differences among these estimators in terms of log likelihood. The proposed approach is validated on widely used benchmark datasets from the UCI machine learning repository. Experimental results show that the proposed approach achieves a good bias-variance trade-off and is a competitive alternative to state-of-the-art Bayesian and non-Bayesian learners (e.g., tree-augmented Naive Bayes and logistic regression).
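For context, a minimal sketch of the model this summary refers to, assuming the standard AODE formulation of Webb et al. (2005): AODE averages the super-parent one-dependence estimators (SPODEs) built from each attribute value x_i whose training frequency F(x_i) meets a minimum support threshold m, and a weighted variant replaces the uniform average with per-SPODE weights w_i that may depend on the instance. The paper's specific log-likelihood-based derivation of the weights is its contribution and is not reproduced here.

% Standard AODE: uniform average over admissible SPODEs, where F(x_i)
% is the training frequency of attribute value x_i and m is a minimum
% support threshold.
\[
  \hat{P}(y, \mathbf{x}) =
    \frac{\sum_{i : F(x_i) \ge m} \hat{P}(y, x_i) \prod_{j=1}^{n} \hat{P}(x_j \mid y, x_i)}
         {\lvert \{\, i : F(x_i) \ge m \,\} \rvert}
\]

% Weighted generalization (sketch): each SPODE receives an
% instance-specific weight w_i(\mathbf{x}); uniform weights recover
% plain AODE, and prediction returns the class maximizing the estimate.
\[
  \hat{P}(y, \mathbf{x}) \propto
    \sum_{i : F(x_i) \ge m} w_i(\mathbf{x})\,
    \hat{P}(y, x_i) \prod_{j=1}^{n} \hat{P}(x_j \mid y, x_i)
\]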