Summary: The ever-increasing use of intelligent multi-agent systems places growing demands upon them. One of these is the ability to reason consistently under uncertainty. This, in turn, is the dominant characteristic of probabilistic learning in graphical models, which, however, lack a natural decentralised formulation. The ideal would therefore be a unifying framework able to combine the strengths of both multi-agent systems and probabilistic inference. In this paper we present a unified interpretation of the inference mechanisms in games and graphical models. In particular, we view fictitious play as a method of minimising the Kullback-Leibler divergence between the current mixed strategies and the optimal mixed strategies at a Nash equilibrium. Conversely, probabilistic inference in the variational mean-field framework can be viewed as fictitious game play that learns the best strategies to explain a probabilistic graphical model. © 2005 IEEE.
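The fictitious-play view described above can be illustrated with a minimal numerical sketch. The game (matching pennies), the iteration count, and the variable names below are illustrative assumptions, not taken from the paper: each player repeatedly best-responds to the opponent's empirical mixed strategy, and the empirical frequencies approach the mixed Nash equilibrium, so the Kullback-Leibler divergence to the equilibrium strategy shrinks.

```python
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum game);
# the column player's payoff is the negative of this.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Pseudo-counts of observed actions (initialised to 1 to avoid zeros).
counts_row = np.ones(2)
counts_col = np.ones(2)

for t in range(10000):
    # Empirical mixed strategies so far.
    p_col = counts_col / counts_col.sum()
    p_row = counts_row / counts_row.sum()
    # Each player best-responds to the opponent's empirical strategy.
    br_row = np.argmax(A @ p_col)        # row maximises its expected payoff
    br_col = np.argmax(-(p_row @ A))     # column maximises -A (zero-sum)
    counts_row[br_row] += 1
    counts_col[br_col] += 1

p_row = counts_row / counts_row.sum()
nash = np.array([0.5, 0.5])              # mixed Nash equilibrium of the game
# KL divergence from the empirical strategy to the equilibrium strategy.
kl = float(np.sum(p_row * np.log(p_row / nash)))
print(p_row, kl)
```

After many iterations the empirical strategy is close to the uniform equilibrium and the KL divergence is near zero, matching the interpretation of fictitious play as a divergence-minimising process.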