Summary: | Bayesian statistical inference loses predictive optimality when generative
models are misspecified.
Working within an existing coherent loss-based generalisation of Bayesian
inference, we show that existing Modular/Cut-model inference is coherent, and we
write down a new family of Semi-Modular Inference (SMI) schemes, indexed by an
influence parameter, with Bayesian inference and Cut-models as special cases.
We give a meta-learning criterion and an estimation procedure for choosing the
inference scheme. The procedure returns Bayesian inference when there is no
misspecification.
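As a sketch only, in assumed notation not used in this summary (a two-module
model in which data Z depend only on a parameter phi in a well-specified module,
data Y depend on (phi, theta) in a suspect module, and tilde-theta is an
auxiliary copy of the suspect parameter), one way such an influence-tempered
family can be written is

$$
p_{\mathrm{pow},\eta}(\varphi,\tilde{\theta} \mid Z, Y) \propto
p(Z \mid \varphi)\, p(Y \mid \varphi, \tilde{\theta})^{\eta}\,
p(\varphi)\, p(\tilde{\theta} \mid \varphi),
\qquad
p_{\mathrm{smi},\eta}(\varphi, \theta, \tilde{\theta} \mid Z, Y) =
p_{\mathrm{pow},\eta}(\varphi, \tilde{\theta} \mid Z, Y)\,
p(\theta \mid \varphi, Y),
$$

with inference for $(\varphi, \theta)$ obtained by marginalising out
$\tilde{\theta}$; at $\eta = 1$ this marginal coincides with the Bayes
posterior, and at $\eta = 0$ it reduces to the Cut-model posterior
$p(\varphi \mid Z)\, p(\theta \mid \varphi, Y)$.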
The framework applies naturally to Multi-modular models. Cut-model inference
allows directed information flow from well-specified modules to misspecified
modules, but not vice versa. An existing alternative power posterior method
gives tunable but undirected control of information flow, improving prediction
in some settings. In contrast, SMI allows tunable and directed information flow
between modules.
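For comparison, in the same assumed notation, the Cut and power posteriors
referred to above can be sketched as

$$
p_{\mathrm{cut}}(\varphi, \theta \mid Z, Y) =
p(\varphi \mid Z)\, p(\theta \mid \varphi, Y),
\qquad
p_{\mathrm{pow},\eta}(\varphi, \theta \mid Z, Y) \propto
p(Z \mid \varphi)\, p(Y \mid \varphi, \theta)^{\eta}\,
p(\varphi)\, p(\theta \mid \varphi).
$$

In the Cut posterior the suspect data $Y$ never informs $\varphi$ (directed but
not tunable), while tempering the whole suspect likelihood by $\eta$
down-weights the influence of $Y$ on $\varphi$ and $\theta$ alike (tunable but
undirected). In the family sketched above, SMI tempers only the feedback from
$Y$ into $\varphi$ while retaining the full conditional
$p(\theta \mid \varphi, Y)$, which is the sense in which its information flow
is both tunable and directed.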
We illustrate our methods on two standard test cases from the literature and
a motivating archaeological data set.