Hierarchical Mixtures of Experts and the EM Algorithm

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum l...
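The abstract describes fitting a mixture model whose components are generalized linear models via EM. As a rough illustration (not the authors' implementation), here is a minimal sketch of EM for a one-level mixture of two linear-regression experts in NumPy. To keep the M-step in closed form, the gate is simplified to input-independent mixing coefficients rather than a full GLIM gating network; all variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Minimal EM sketch for a one-level mixture of two linear experts.
# Simplification vs. the paper: the gate is a constant mixing vector pi,
# not a softmax GLIM of the input, so both M-step updates are closed form.

rng = np.random.default_rng(0)

# Synthetic data drawn from two linear regimes with slopes +2 and -2.
n = 200
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
regime = rng.integers(0, 2, n)
true_w = np.array([[0.0, 2.0], [1.0, -2.0]])  # per-expert [intercept, slope]
y = np.einsum('ij,ij->i', X, true_w[regime]) + 0.1 * rng.normal(size=n)

K = 2
w = rng.normal(size=(K, 2))   # expert regression weights
pi = np.full(K, 1.0 / K)      # mixing coefficients
sigma2 = 1.0                  # shared noise variance

for _ in range(50):
    # E-step: posterior responsibility of each expert for each point,
    # computed in log space for numerical stability.
    resid = y[:, None] - X @ w.T                     # (n, K)
    log_lik = -0.5 * resid**2 / sigma2 - 0.5 * np.log(2 * np.pi * sigma2)
    log_post = np.log(pi) + log_lik
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: responsibility-weighted least squares per expert,
    # then update the mixing coefficients and noise variance.
    for k in range(K):
        r = resp[:, k]
        A = X.T @ (r[:, None] * X)
        b = X.T @ (r * y)
        w[k] = np.linalg.solve(A, b)
    pi = resp.mean(axis=0)
    resid = y[:, None] - X @ w.T
    sigma2 = (resp * resid**2).sum() / n

print("mixing coefficients:", pi, "noise variance:", sigma2)
```

With well-separated regimes, the two experts typically recover slopes near +2 and -2; making the gate a softmax function of the input, and nesting gates in a tree, yields the hierarchical architecture of the paper.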


Bibliographic Details
Main Authors: Jordan, Michael I., Jacobs, Robert A.
Language: English
Published: 2004
Online Access: http://hdl.handle.net/1721.1/7206