iVAE-GAN: Identifiable VAE-GAN Models for Latent Representation Learning

Bibliographic Details
Main Authors: Bjorn Uttrup Dideriksen, Kristoffer Derosche, Zheng-Hua Tan
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Subjects: Identifiability; VAE-GAN; deep learning; latent representation learning
Online Access: https://ieeexplore.ieee.org/document/9766338/
author Bjorn Uttrup Dideriksen
Kristoffer Derosche
Zheng-Hua Tan
collection DOAJ
description Remarkable progress has been made within nonlinear Independent Component Analysis (ICA) and identifiable deep latent variable models. Formally, the latest nonlinear ICA theory enables us to recover the true latent variables up to a linear transformation by leveraging unsupervised deep learning. This is of significant importance for unsupervised learning in general, as the true latent variables are of principal interest for meaningful representations. These theoretical results stand in stark contrast to the mostly heuristic approaches used for representation learning, which do not provide analytical relations to the true latent variables. We extend the family of identifiable models by proposing an identifiable Variational Autoencoder (VAE)-based Generative Adversarial Network (GAN) model, which we name iVAE-GAN. The latent space of most GANs, including VAE-GAN, is generally unrelated to the true latent variables. With iVAE-GAN we show the first principled approach to a theoretically meaningful latent space by means of adversarial training. We implement the novel iVAE-GAN architecture and show its identifiability, which is confirmed by experiments. The GAN objective is believed to be an important addition to identifiable models as it is one of the most powerful deep generative models. Furthermore, no requirements are imposed on the adversarial training, leading to a very general model.
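
To make the model class described above concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea: a conditional VAE with an auxiliary-variable prior p(z|u), in the spirit of identifiable nonlinear ICA, combined with a VAE-GAN-style discriminator applied to reconstructions. This is not the authors' architecture or code; the class IVAEGANSketch, the helper mlp, the function train_step, the losses, and all dimensions are invented here for illustration and may differ from the paper.

    # Hypothetical illustration only, not the authors' implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def mlp(dim_in, dim_out, hidden=64):
        # Small fully connected network used for every component below.
        return nn.Sequential(nn.Linear(dim_in, hidden), nn.LeakyReLU(0.2),
                             nn.Linear(hidden, dim_out))

    class IVAEGANSketch(nn.Module):
        def __init__(self, x_dim, z_dim, u_dim):
            super().__init__()
            self.enc = mlp(x_dim + u_dim, 2 * z_dim)   # q(z | x, u): mean and log-variance
            self.dec = mlp(z_dim, x_dim)               # p(x | z): reconstruction
            self.prior = mlp(u_dim, 2 * z_dim)         # conditional prior p(z | u)
            self.disc = mlp(x_dim, 1)                  # discriminator: real vs. generated x

        def elbo(self, x, u):
            mu, logvar = self.enc(torch.cat([x, u], dim=-1)).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
            x_hat = self.dec(z)
            pmu, plogvar = self.prior(u).chunk(2, dim=-1)
            # KL divergence between the two diagonal Gaussians q(z|x,u) and p(z|u).
            kl = 0.5 * (plogvar - logvar
                        + (logvar.exp() + (mu - pmu) ** 2) / plogvar.exp() - 1).sum(-1)
            rec = F.mse_loss(x_hat, x, reduction="none").sum(-1)   # Gaussian decoder term
            return (rec + kl).mean(), x_hat

    def train_step(model, x, u, opt_vae, opt_disc):
        # Discriminator update: real samples vs. detached reconstructions.
        _, x_hat = model.elbo(x, u)
        d_real, d_fake = model.disc(x), model.disc(x_hat.detach())
        d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
        opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

        # Encoder/decoder update: ELBO plus an adversarial term that rewards
        # reconstructions the discriminator classifies as real.
        vae_loss, x_hat = model.elbo(x, u)
        d_out = model.disc(x_hat)
        g_loss = vae_loss + F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
        opt_vae.zero_grad(); g_loss.backward(); opt_vae.step()
        return d_loss.item(), g_loss.item()

In this sketch, opt_vae would be an optimizer (e.g. torch.optim.Adam) over the encoder, decoder, and prior parameters and opt_disc one over the discriminator parameters; the auxiliary variable u (for example a segment label or time index, as in the nonlinear-ICA literature) is what conditions the prior and underpins identifiability arguments of this kind.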
format Article
id doaj.art-a6a90a74deb04008a264cae8a5cdba3a
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2022-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-a6a90a74deb04008a264cae8a5cdba3a (record updated 2022-12-22T02:23:40Z)
Citation: IEEE Access, vol. 10, pp. 48405-48418, 2022-01-01. ISSN 2169-3536. DOI: 10.1109/ACCESS.2022.3172333. IEEE document number: 9766338.
Title: iVAE-GAN: Identifiable VAE-GAN Models for Latent Representation Learning
Authors: Bjorn Uttrup Dideriksen (https://orcid.org/0000-0003-3042-4149), Kristoffer Derosche, Zheng-Hua Tan (https://orcid.org/0000-0001-6856-8928), all with the Department of Electronic Systems, Aalborg University, Aalborg, Denmark
Abstract, keywords, and online-access URL duplicate the description, topic, and url fields of this record.
title iVAE-GAN: Identifiable VAE-GAN Models for Latent Representation Learning
topic Identifiability
VAE-GAN
deep learning
latent representation learning
url https://ieeexplore.ieee.org/document/9766338/