Learning Bayesian network structure using LP relaxations
We propose to solve the combinatorial problem of finding the highest scoring Bayesian network structure from data. This structure learning problem can be viewed as an inference problem where the variables specify the choice of parents for each node in the graph. The key combinatorial difficulty arises from the global constraint that the graph structure has to be acyclic. We cast the structure learning problem as a linear program over the polytope defined by valid acyclic structures. In relaxing this problem, we maintain an outer bound approximation to the polytope and iteratively tighten it by searching over a new class of valid constraints. If an integral solution is found, it is guaranteed to be the optimal Bayesian network. When the relaxation is not tight, the fast dual algorithms we develop remain useful in combination with a branch and bound method. Empirical results suggest that the method is competitive with or faster than alternative exact methods based on dynamic programming.
Main Authors: | Jaakkola, Tommi S.; Sontag, David Alexander; Globerson, Amir; Meila, Marina |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article (Conference Paper) |
Language: | en_US |
Published: | Society for Artificial Intelligence and Statistics, 2011 |
Citation: | Jaakkola, Tommi, David Sontag, Amir Globerson, and Marina Meila. "Learning Bayesian Network Structure using LP Relaxations." Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, May 13-15, Chia Laguna Resort, Sardinia, Italy. |
Online Access: | http://hdl.handle.net/1721.1/66317 https://orcid.org/0000-0002-2199-0379 http://jmlr.csail.mit.edu/proceedings/papers/v9/jaakkola10a/jaakkola10a.pdf |
License: | Creative Commons Attribution-Noncommercial-Share Alike 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/) |
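
The abstract casts structure learning as a linear program over the polytope of acyclic structures, relaxed via an outer bound that is iteratively tightened with valid constraints. The following is a minimal sketch of that general idea, not the authors' implementation: it introduces parent-set indicator variables, approximates acyclicity with cluster-style constraints (every non-empty cluster of nodes must contain at least one node whose chosen parents all lie outside the cluster), and solves the relaxation with SciPy's generic `linprog` solver. The synthetic random scores, variable names, and the use of `scipy.optimize.linprog` are illustrative assumptions, not details taken from the paper.

```python
# Rough illustration (not the paper's implementation) of an LP relaxation for
# Bayesian network structure learning. Each variable z[i, S] indicates that
# node i chooses parent set S. Local scores are synthetic random numbers
# standing in for data-derived family scores (e.g. BIC or BDeu).
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 4  # number of nodes; all 2^n - 1 cluster constraints are enumerated below
nodes = list(range(n))


def subsets(items):
    """All subsets (including the empty set) of `items`, as frozensets."""
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]


# Candidate parent sets for each node: every subset of the other nodes.
parent_sets = {i: subsets(j for j in nodes if j != i) for i in nodes}

# Index the LP variables z[i, S] and draw synthetic local scores.
var_index, scores = {}, []
for i in nodes:
    for S in parent_sets[i]:
        var_index[(i, S)] = len(scores)
        scores.append(rng.normal())
scores = np.array(scores)

# Equality constraints: each node selects exactly one parent set.
A_eq = np.zeros((n, len(scores)))
for i in nodes:
    for S in parent_sets[i]:
        A_eq[i, var_index[(i, S)]] = 1.0
b_eq = np.ones(n)

# Cluster constraints, written as <= after negating the natural >= 1 form:
# for every non-empty cluster C, at least one node in C takes all of its
# parents from outside C, which rules out directed cycles contained in C.
A_ub, b_ub = [], []
for C in subsets(nodes):
    if not C:
        continue
    row = np.zeros(len(scores))
    for i in C:
        for S in parent_sets[i]:
            if not (S & C):
                row[var_index[(i, S)]] = -1.0
    A_ub.append(row)
    b_ub.append(-1.0)

# Solve the relaxation with z in [0, 1]; linprog minimizes, so negate scores.
res = linprog(-scores, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0), method="highs")
print("upper bound on the best structure score:", -res.fun)

# If the LP solution happens to be integral, it encodes an optimal DAG directly.
if np.allclose(res.x, np.round(res.x)):
    for (i, S), k in var_index.items():
        if res.x[k] > 0.5:
            print(f"node {i}: parents {sorted(S)}")
```

Enumerating all 2^n - 1 cluster constraints, as this toy example does, is only feasible for very small n; the paper instead works with fast dual algorithms, adds violated constraints iteratively, and falls back to branch and bound when the relaxation remains fractional.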