Auto-encoding sequential Monte Carlo
We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks to model complex conditional probability distributions. We develop additional theoretical insights and introduce a new training procedure which improves both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement and scalable means for simultaneous model learning and proposal adaptation in deep generative models.
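As a rough illustration of the objective described in the abstract, the sketch below computes the standard SMC evidence estimate Ẑ for a toy model; the expectation of log Ẑ over the sampler's randomness is the lower bound on the log marginal likelihood that AESMC maximizes. Everything concrete here is an assumption for illustration only: the linear-Gaussian state-space model, the bootstrap-style proposal, the function name `smc_log_evidence`, and all parameter values. The paper learns the model and proposal with neural networks rather than fixing them as done here.

```python
import numpy as np


def smc_log_evidence(ys, K=100, phi=0.9, q_scale=1.0, rng=None):
    """Return an SMC estimate of log p(y_{1:T}) for a toy 1-D state-space model.

    Assumed model (illustration only):
        x_1 ~ N(0, 1),  x_t ~ N(phi * x_{t-1}, 1),  y_t ~ N(x_t, 1)
    Assumed proposal: q(x_t | x_{t-1}) = N(phi * x_{t-1}, q_scale^2).
    """
    rng = np.random.default_rng() if rng is None else rng

    def log_normal(x, mean, std):
        return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

    def log_mean_exp(a):
        m = a.max()
        return m + np.log(np.mean(np.exp(a - m)))

    # t = 1: sample particles from the prior, weight by the likelihood.
    x = rng.normal(0.0, 1.0, size=K)
    log_w = log_normal(ys[0], x, 1.0)
    log_Z = log_mean_exp(log_w)

    for y in ys[1:]:
        # Multinomial resampling proportional to the normalized weights.
        probs = np.exp(log_w - log_w.max())
        probs /= probs.sum()
        ancestors = rng.choice(K, size=K, p=probs)

        # Propose new particles and compute incremental weights
        # w_t = p(x_t | x_{t-1}) p(y_t | x_t) / q(x_t | x_{t-1}).
        mean = phi * x[ancestors]
        x = rng.normal(mean, q_scale)
        log_w = (log_normal(x, mean, 1.0)
                 + log_normal(y, x, 1.0)
                 - log_normal(x, mean, q_scale))

        # Accumulate log of the average incremental weight:
        # log Z_hat = sum_t log( (1/K) * sum_k w_t^k ).
        log_Z += log_mean_exp(log_w)

    return log_Z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ys = np.sin(np.linspace(0.0, 3.0, 20)) + rng.normal(0.0, 1.0, size=20)
    # A single run gives one sample of log Z_hat; averaging repeated runs
    # approximates the lower bound E[log Z_hat] <= log p(y_{1:T}).
    print(smc_log_evidence(ys, K=100, rng=rng))
```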
Main Authors: | Le, T; Igl, M; Rainforth, T; Jin, T; Wood, F |
---|---|
Format: | Conference item |
Published: | OpenReview, 2018 |
id | oxford-uuid:3c35d146-4402-40d4-ae1c-1e1c3d09b17a |
---|---|
institution | University of Oxford |