Using deep LSD to build operators in GANs latent space with meaning in real space

Generative models rely on the idea that data can be represented in terms of latent variables which are uncorrelated by definition. This lack of correlation among the latent variables is important because it suggests that the latent-space manifold is simpler to understand and manipulate than the real-space representation. Many types of generative models are used in deep learning, e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs). Based on the idea that the latent space behaves like a vector space (Radford et al., 2015), we ask whether we can expand the latent-space representation of our data elements in terms of an orthonormal basis set. Here we propose a method to build a set of linearly independent vectors in the latent space of a trained GAN, which we call quasi-eigenvectors. These quasi-eigenvectors have two key properties: i) they span the latent space; ii) a subset of them maps one-to-one onto the labeled features. We show that in the case of the MNIST image data set, while the number of dimensions in latent space is large by design, 98% of the data in real space maps to a sub-domain of latent space whose dimensionality equals the number of labels. We then show how the quasi-eigenvectors can be used for Latent Spectral Decomposition (LSD), which we apply to denoise MNIST images. Finally, using the quasi-eigenvectors, we construct rotation matrices in latent space which map to feature transformations in real space. Overall, the quasi-eigenvectors give insight into the topology of the latent space.
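
To make these operations concrete, here is a minimal NumPy sketch, not the authors' implementation: the quasi-eigenvectors are stand-ins (random vectors orthonormalized via QR), the latent dimensionality d, the component count k, and the helper plane_rotation are illustrative assumptions, and the trained generator itself is omitted. The sketch shows the basis expansion behind LSD, a truncation of the kind a denoising step could use, and a rotation operator built in the plane of two quasi-eigenvectors.

import numpy as np

rng = np.random.default_rng(0)
d = 100                      # latent-space dimensionality (illustrative)

# Stand-in quasi-eigenvectors: in the paper they come from a trained GAN;
# here we orthonormalize random vectors via QR just so the sketch runs.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Q = Q.T                      # rows of Q are the basis vectors

z = rng.standard_normal(d)   # a latent vector, e.g., one generating a digit

# Latent Spectral Decomposition: expand z in the quasi-eigenvector basis.
coeffs = Q @ z               # the "spectrum" of z
assert np.allclose(Q.T @ coeffs, z)

# Truncation of the kind a denoising step could use: keep the k
# largest-magnitude components (the abstract reports that a label-sized
# subset of directions captures 98% of the MNIST data).
k = 10
keep = np.argsort(np.abs(coeffs))[-k:]
z_denoised = Q[keep].T @ coeffs[keep]

# Rotation operator: rotate by angle theta in the plane spanned by the
# i-th and j-th quasi-eigenvectors, fixing the orthogonal complement.
def plane_rotation(Q, i, j, theta):
    R = np.eye(Q.shape[1])
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j], R[j, i] = -np.sin(theta), np.sin(theta)
    return Q.T @ R @ Q       # conjugate back to ambient latent coordinates

z_rotated = plane_rotation(Q, 0, 1, np.pi / 4) @ z
# Decoding z_rotated with the trained generator would then realize the
# corresponding feature transformation in real (image) space.

Because the rows of Q are orthonormal here, Q.T inverts it, so the expansion and the conjugated rotation are exact up to floating-point error; the paper's quasi-eigenvectors are only guaranteed linearly independent, in which case a pseudo-inverse would take the place of Q.T.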

Bibliographic Details
Main Authors: J. Quetzalcóatl Toledo-Marín, James A. Glazier
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2023-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10309997/?tool=EBI