Methods and Experiments With Bounded Tree-width Markov Networks
Markov trees generalize naturally to bounded tree-width Markov networks, on which exact computations can still be done efficiently. However, learning the maximum likelihood Markov network with tree-width greater than 1 is NP-hard, so we discuss a few algorithms for approximating the optimal Markov net...
Main Authors: | Liang, Percy; Srebro, Nathan |
---|---|
Language: | en_US |
Published: | 2005 |
Subjects: | AI; tree-width; hypertrees; Markov Networks; maximum likelihood; MDL |
Online Access: | http://hdl.handle.net/1721.1/30511 |
_version_ | 1811092421061115904 |
---|---|
author | Liang, Percy; Srebro, Nathan |
author_facet | Liang, Percy; Srebro, Nathan |
author_sort | Liang, Percy |
collection | MIT |
description | Markov trees generalize naturally to bounded tree-width Markov networks, on which exact computations can still be done efficiently. However, learning the maximum likelihood Markov network with tree-width greater than 1 is NP-hard, so we discuss a few algorithms for approximating the optimal Markov network. We present a set of methods for training a density estimator. Each method is specified by three arguments: tree-width, model scoring metric (maximum likelihood or minimum description length), and model representation (using one joint distribution or several class-conditional distributions). On these methods, we give empirical results on density estimation and classification tasks and explore the implications of these arguments. |
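The tree-width-1 case mentioned in the description — learning a maximum-likelihood Markov tree — is the one case that is exactly solvable, via the classic Chow-Liu algorithm: compute empirical pairwise mutual information and take a maximum spanning tree over it. A minimal sketch of that baseline (not code from the report; the data layout and function names are illustrative assumptions):

```python
# Hedged sketch of the Chow-Liu baseline (the tree-width-1 case):
# the ML Markov tree is the maximum spanning tree under pairwise
# empirical mutual information. Names here are illustrative only.
from collections import Counter
from itertools import combinations
from math import log

def mutual_information(data, i, j):
    """Empirical mutual information I(X_i; X_j) from rows of discrete data."""
    n = len(data)
    pi = Counter(row[i] for row in data)
    pj = Counter(row[j] for row in data)
    pij = Counter((row[i], row[j]) for row in data)
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

def chow_liu_tree(data, num_vars):
    """Return edges of a max-MI spanning tree (Kruskal with union-find)."""
    edges = sorted(
        ((mutual_information(data, i, j), i, j)
         for i, j in combinations(range(num_vars), 2)),
        reverse=True)
    parent = list(range(num_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data: X0 and X1 perfectly correlated, X2 independent of both,
# so the learned tree must contain the edge (0, 1).
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
print(chow_liu_tree(data, 3))
```

For tree-width greater than 1 this greedy exactness is lost (the problem becomes NP-hard, as the description notes), which is why the report turns to approximation algorithms.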
first_indexed | 2024-09-23T15:17:54Z |
id | mit-1721.1/30511 |
institution | Massachusetts Institute of Technology |
language | en_US |
last_indexed | 2024-09-23T15:17:54Z |
publishDate | 2005 |
record_format | dspace |
spelling | mit-1721.1/30511 2019-04-11T04:57:51Z Methods and Experiments With Bounded Tree-width Markov Networks Liang, Percy Srebro, Nathan AI tree-width hypertrees Markov Networks maximum likelihood MDL Markov trees generalize naturally to bounded tree-width Markov networks, on which exact computations can still be done efficiently. However, learning the maximum likelihood Markov network with tree-width greater than 1 is NP-hard, so we discuss a few algorithms for approximating the optimal Markov network. We present a set of methods for training a density estimator. Each method is specified by three arguments: tree-width, model scoring metric (maximum likelihood or minimum description length), and model representation (using one joint distribution or several class-conditional distributions). On these methods, we give empirical results on density estimation and classification tasks and explore the implications of these arguments. 2005-12-22T02:19:52Z 2005-12-22T02:19:52Z 2004-12-30 MIT-CSAIL-TR-2004-081 AIM-2004-030 http://hdl.handle.net/1721.1/30511 en_US Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory 10 p. 10714507 bytes 473643 bytes application/postscript application/pdf application/postscript application/pdf |
spellingShingle | AI tree-width hypertrees Markov Networks maximum likelihood MDL Liang, Percy Srebro, Nathan Methods and Experiments With Bounded Tree-width Markov Networks |
title | Methods and Experiments With Bounded Tree-width Markov Networks |
title_full | Methods and Experiments With Bounded Tree-width Markov Networks |
title_fullStr | Methods and Experiments With Bounded Tree-width Markov Networks |
title_full_unstemmed | Methods and Experiments With Bounded Tree-width Markov Networks |
title_short | Methods and Experiments With Bounded Tree-width Markov Networks |
title_sort | methods and experiments with bounded tree width markov networks |
topic | AI tree-width hypertrees Markov Networks maximum likelihood MDL |
url | http://hdl.handle.net/1721.1/30511 |
work_keys_str_mv | AT liangpercy methodsandexperimentswithboundedtreewidthmarkovnetworks AT srebronathan methodsandexperimentswithboundedtreewidthmarkovnetworks |