Monte Carlo variational auto-encoders

Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO). To obtain a tighter ELBO and hence better variational approximations, it has been proposed to use importance sampling to get a lower variance estimate of the evidence. However, importance sampling is known to perform poorly in high dimensions. While it has been suggested many times in the literature to use more sophisticated algorithms such as Annealed Importance Sampling (AIS) and its Sequential Importance Sampling (SIS) extensions, the potential benefits brought by these advanced techniques have never been realized for VAE: the AIS estimate cannot be easily differentiated, while SIS requires the specification of carefully chosen backward Markov kernels. In this paper, we address both issues and demonstrate the performance of the resulting Monte Carlo VAEs on a variety of applications.
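The "tighter ELBO" that the abstract refers to is the standard importance-weighted bound on the log-evidence: draw K samples from the encoder q(z|x), average the importance weights p(x, z)/q(z|x), and take the logarithm; in expectation this quantity lower-bounds log p(x) and tightens as K grows. Below is a minimal NumPy sketch on a toy linear-Gaussian model; the toy model, the function names and the parameter values are illustrative assumptions, not taken from the paper.

# Minimal sketch (not from the paper): importance-weighted evidence estimate
# for the toy model p(x, z) = N(z; 0, 1) N(x; z, 1) with a Gaussian proposal q(z | x).
import numpy as np

def log_joint(x, z):
    # log p(x, z) under the toy linear-Gaussian model
    log_prior = -0.5 * (z**2 + np.log(2 * np.pi))
    log_lik = -0.5 * ((x - z)**2 + np.log(2 * np.pi))
    return log_prior + log_lik

def iw_elbo(x, q_mean, q_std, K=64, rng=None):
    # Estimate log (1/K) sum_k p(x, z_k) / q(z_k | x) with z_k ~ q(. | x):
    # in expectation this lower-bounds log p(x), and the bound tightens as K grows.
    rng = np.random.default_rng() if rng is None else rng
    z = q_mean + q_std * rng.standard_normal(K)
    log_q = -0.5 * (((z - q_mean) / q_std) ** 2 + np.log(2 * np.pi)) - np.log(q_std)
    log_w = log_joint(x, z) - log_q
    return np.logaddexp.reduce(log_w) - np.log(K)

# K = 1 recovers the usual single-sample ELBO estimate; larger K gives a tighter bound.
print(iw_elbo(x=1.5, q_mean=0.75, q_std=0.8, K=1))
print(iw_elbo(x=1.5, q_mean=0.75, q_std=0.8, K=256))

In high dimensions the variance of these importance weights blows up, which is the failure mode the abstract points to and the reason the paper turns to annealed (AIS) and sequential (SIS) constructions instead.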

Bibliographic Details
Main Authors: Thin, A, Kotelevskii, N, Durmus, A, Panov, M, Moulines, E, Doucet, A
Format: Conference item
Language: English
Published: Journal of Machine Learning Research, 2021
Institution: University of Oxford