Learning efficiently with approximate inference via dual losses
Many structured prediction tasks involve complex models where inference is computationally intractable, but where it can be well approximated using a linear programming relaxation. Previous approaches to learning for structured prediction (e.g., cutting-plane, subgradient methods, perceptron)...
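As a rough illustration of the inference step the abstract refers to (and not of the dual-loss method the paper itself proposes), the sketch below computes an approximate MAP assignment for a pairwise chain MRF using the standard local-polytope LP relaxation; learning schemes such as those named above (cutting-plane, subgradient, perceptron) would call a routine like this at each training update. The chain structure, the use of scipy.optimize.linprog, and the random potentials are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, not the paper's algorithm: approximate MAP inference for a
# pairwise chain MRF via the standard local-polytope LP relaxation, solved with
# scipy.optimize.linprog.  Problem sizes and potentials below are illustrative.
import numpy as np
from scipy.optimize import linprog


def lp_relaxed_map(theta_node, theta_edge):
    """theta_node: (n, k) unary potentials; theta_edge: (n-1, k, k) pairwise
    potentials.  Returns a labeling obtained by rounding the LP node marginals."""
    n, k = theta_node.shape
    n_node_vars = n * k                # mu_i(x_i)
    n_edge_vars = (n - 1) * k * k      # mu_{i,i+1}(x_i, x_{i+1})
    n_vars = n_node_vars + n_edge_vars

    def node_idx(i, xi):
        return i * k + xi

    def edge_idx(e, xi, xj):
        return n_node_vars + e * k * k + xi * k + xj

    # linprog minimizes, so negate the potentials to maximize the LP objective.
    c = np.concatenate([-theta_node.ravel(), -theta_edge.ravel()])

    rows, rhs = [], []
    # Normalization: sum_{x_i} mu_i(x_i) = 1 for every node i.
    for i in range(n):
        row = np.zeros(n_vars)
        row[[node_idx(i, xi) for xi in range(k)]] = 1.0
        rows.append(row)
        rhs.append(1.0)
    # Marginalization consistency between each edge and its two endpoints.
    for e in range(n - 1):
        for xi in range(k):  # sum_{x_j} mu_e(x_i, x_j) = mu_i(x_i)
            row = np.zeros(n_vars)
            row[[edge_idx(e, xi, xj) for xj in range(k)]] = 1.0
            row[node_idx(e, xi)] -= 1.0
            rows.append(row)
            rhs.append(0.0)
        for xj in range(k):  # sum_{x_i} mu_e(x_i, x_j) = mu_{i+1}(x_j)
            row = np.zeros(n_vars)
            row[[edge_idx(e, xi, xj) for xi in range(k)]] = 1.0
            row[node_idx(e + 1, xj)] -= 1.0
            rows.append(row)
            rhs.append(0.0)

    res = linprog(c, A_eq=np.array(rows), b_eq=np.array(rhs),
                  bounds=(0.0, 1.0), method="highs")
    node_marginals = res.x[:n_node_vars].reshape(n, k)
    return node_marginals.argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 5, 3
    labels = lp_relaxed_map(rng.normal(size=(n, k)), rng.normal(size=(n - 1, k, k)))
    print("approximate MAP labeling:", labels)
```

Because the graph here is a chain, this particular LP is tight and rounding recovers the exact MAP; on loopy graphs the same construction yields only an approximation, which is the setting the abstract describes.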
Main Authors: Meshi, Ofer; Sontag, David Alexander; Jaakkola, Tommi S.; Globerson, Amir
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: International Machine Learning Society, 2011
Online Access: http://hdl.handle.net/1721.1/62851 https://orcid.org/0000-0002-2199-0379
Similar Items
- More data means less inference: A pseudo-max approach to structured learning
  by: Sontag, David, et al.
  Published: (2011)
- Convergence Rate Analysis of MAP Coordinate Minimization Algorithms
  by: Meshi, Ofer, et al.
  Published: (2022)
- Convergence Rate Analysis of MAP Coordinate Minimization Algorithms
  by: Meshi, Ofer, et al.
  Published: (2021)
- Learning Bayesian network structure using LP relaxations
  by: Jaakkola, Tommi S., et al.
  Published: (2011)
- Approximate inference in graphical models using LP relaxations
  by: Sontag, David Alexander
  Published: (2011)