Factorial Hidden Markov Models
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
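The "combinatorial nature of the hidden state" in the abstract can be made concrete with a short math sketch. The notation below (M underlying chains with K states each, chain states S_t^(m)) follows the standard factorial-HMM formulation and is an assumption, not something taken from this record:

```latex
% Factorial HMM structure (standard notation, assumed): M underlying
% Markov chains, each with K states, evolve independently,
\[
  P(S_t \mid S_{t-1}) = \prod_{m=1}^{M} P\!\left(S_t^{(m)} \mid S_{t-1}^{(m)}\right),
\]
% and are coupled only through the shared observation model
\[
  P(Y_t \mid S_t) = P\!\left(Y_t \mid S_t^{(1)}, \dots, S_t^{(M)}\right).
\]
% Because the observation couples all chains, exact smoothing must
% track all K^M joint configurations; a naive forward-backward pass
% over the equivalent flat HMM costs O(T K^{2M}) time, which is why
% the exact E-step is intractable and the mean field or Gibbs
% sampling approximations are used instead.
```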
Main Authors: | Ghahramani, Zoubin; Jordan, Michael I. |
---|---|
Language: | en_US |
Published: | 2004 |
Subjects: | AI, MIT, Artificial Intelligence, Hidden Markov Models, Neural networks, Time series, Mean field theory, Gibbs sampling, Factorial, Learning algorithms, Machine learning |
Online Access: | http://hdl.handle.net/1721.1/7188 |
_version_ | 1826201854813405184 |
---|---|
author | Ghahramani, Zoubin; Jordan, Michael I. |
author_facet | Ghahramani, Zoubin; Jordan, Michael I. |
author_sort | Ghahramani, Zoubin |
collection | MIT |
description | We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm. |
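To make the description's "computationally expensive exact algorithm" concrete, here is a minimal Python sketch (illustrative only; the function name joint_transition and the Kronecker-product construction are assumptions for this sketch, not code from the paper) of why exact inference blows up: M independent chains with K states each are equivalent to a single flat HMM over K^M joint states.

```python
import numpy as np
from functools import reduce

def joint_transition(chain_transitions):
    """Collapse M independent chains into one flat HMM transition matrix.

    Under a consistent ordering of the joint states, the transition
    matrix of the equivalent flat HMM is the Kronecker product of the
    per-chain matrices, so it has K**M rows and columns. Exact
    forward-backward on it costs O(T * K**(2*M)) time, which is the
    intractable exact E-step the description refers to.
    """
    return reduce(np.kron, chain_transitions)

# Illustrative example: M = 3 chains, K = 2 states per chain.
rng = np.random.default_rng(0)
chains = []
for _ in range(3):
    A = rng.random((2, 2))
    chains.append(A / A.sum(axis=1, keepdims=True))  # make rows stochastic

A_joint = joint_transition(chains)
print(A_joint.shape)        # (8, 8): 2**3 = 8 joint states
print(A_joint.sum(axis=1))  # each row still sums to 1
```

The mean field approximation sidesteps this product entirely, keeping only M separate K-state posteriors per time step instead of one posterior over all K^M joint configurations.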
first_indexed | 2024-09-23T11:58:15Z |
id | mit-1721.1/7188 |
institution | Massachusetts Institute of Technology |
language | en_US |
last_indexed | 2024-09-23T11:58:15Z |
publishDate | 2004 |
record_format | dspace |
spelling | mit-1721.1/7188 2019-04-12T08:34:03Z Factorial Hidden Markov Models Ghahramani, Zoubin Jordan, Michael I. AI MIT Artificial Intelligence Hidden Markov Models Neural networks Time series Mean field theory Gibbs sampling Factorial Learning algorithms Machine learning We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm. 2004-10-20T20:49:14Z 2004-10-20T20:49:14Z 1996-02-09 AIM-1561 CBCL-130 http://hdl.handle.net/1721.1/7188 en_US AIM-1561 CBCL-130 7 p. 198365 bytes 244196 bytes application/postscript application/pdf application/postscript application/pdf |
spellingShingle | AI MIT Artificial Intelligence Hidden Markov Models Neural networks Time series Mean field theory Gibbs sampling Factorial Learning algorithms Machine learning Ghahramani, Zoubin Jordan, Michael I. Factorial Hidden Markov Models |
title | Factorial Hidden Markov Models |
title_full | Factorial Hidden Markov Models |
title_fullStr | Factorial Hidden Markov Models |
title_full_unstemmed | Factorial Hidden Markov Models |
title_short | Factorial Hidden Markov Models |
title_sort | factorial hidden markov models |
topic | AI MIT Artificial Intelligence Hidden Markov Models Neural networks Time series Mean field theory Gibbs sampling Factorial Learning algorithms Machine learning |
url | http://hdl.handle.net/1721.1/7188 |
work_keys_str_mv | AT ghahramanizoubin factorialhiddenmarkovmodels AT jordanmichaeli factorialhiddenmarkovmodels |