Using deep LSD to build operators in GANs latent space with meaning in real space.
Generative models rely on the idea that data can be represented in terms of latent variables which are uncorrelated by definition. Lack of correlation among the latent variable support is important because it suggests that the latent-space manifold is simpler to understand and manipulate than the real-space representation. …
Main Authors: | J Quetzalcóatl Toledo-Marín; James A Glazier
---|---
Format: | Article
Language: | English
Published: | Public Library of Science (PLoS), 2023-01-01
Series: | PLoS ONE |
Online Access: | https://doi.org/10.1371/journal.pone.0287736 |
author | J Quetzalcóatl Toledo-Marín; James A Glazier
collection | DOAJ |
description | Generative models rely on the idea that data can be represented in terms of latent variables which are uncorrelated by definition. Lack of correlation among the latent variable support is important because it suggests that the latent-space manifold is simpler to understand and manipulate than the real-space representation. Many types of generative models are used in deep learning, e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs). Based on the idea that the latent space behaves like a vector space (Radford et al., 2015), we ask whether we can expand the latent-space representation of our data elements in terms of an orthonormal basis set. Here we propose a method to build a set of linearly independent vectors in the latent space of a trained GAN, which we call quasi-eigenvectors. These quasi-eigenvectors have two key properties: i) they span the latent space; ii) a subset of these quasi-eigenvectors maps one-to-one to the labeled features. We show that, in the case of the MNIST image data set, while the number of dimensions in latent space is large by design, 98% of the data in real space maps to a sub-domain of latent space of dimensionality equal to the number of labels. We then show how the quasi-eigenvectors can be used for Latent Spectral Decomposition (LSD). We apply LSD to denoise MNIST images. Finally, using the quasi-eigenvectors, we construct rotation matrices in latent space which map to feature transformations in real space. Overall, the quasi-eigenvectors give us insight into the topology of the latent space. |
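The description above outlines a pipeline: estimate label-aligned directions in the GAN's latent space, orthonormalize them into quasi-eigenvectors, project latent codes onto that basis (LSD) to denoise, and rotate within quasi-eigenvector planes to transform features in real space. Below is a minimal numpy sketch of that idea. The generator `G`, the classifier `classify`, and the per-label-mean construction of the quasi-eigenvectors are assumptions for illustration; the paper's actual construction may differ.

```python
import numpy as np

def quasi_eigenvectors(latents, labels, n_labels=10):
    # Mean latent vector per label, then orthonormalize the means
    # (QR gives a Gram-Schmidt-style orthonormal basis for their span).
    means = np.stack([latents[labels == k].mean(axis=0) for k in range(n_labels)])
    q, _ = np.linalg.qr(means.T)   # columns of q are orthonormal
    return q.T                     # rows are the quasi-eigenvectors

def lsd_denoise(z, basis):
    # Latent Spectral Decomposition: expand z in the quasi-eigenvector
    # basis and keep only that projection, discarding off-basis noise.
    coeffs = basis @ z             # spectral coefficients of z
    return basis.T @ coeffs        # reconstruction from the basis

def latent_rotation(basis, i, j, theta):
    # Rotation in the plane spanned by quasi-eigenvectors i and j;
    # applied to a latent code, it morphs feature i toward feature j.
    u, v = basis[i], basis[j]
    R = np.eye(u.shape[0])
    R += np.sin(theta) * (np.outer(v, u) - np.outer(u, v))
    R += (np.cos(theta) - 1.0) * (np.outer(u, u) + np.outer(v, v))
    return R

# Hypothetical usage, assuming a trained generator G and a classifier
# that assigns digit labels to G(z) for sampled z:
#   latents = np.random.randn(10000, 100)              # 100-d latent space
#   labels  = classify(G(latents))                     # predicted MNIST digits
#   basis   = quasi_eigenvectors(latents, labels)
#   z_clean = lsd_denoise(z_noisy, basis)              # denoise by projection
#   z_new   = latent_rotation(basis, 3, 8, np.pi/4) @ z  # feature rotation
```

Note the basis rows here span only the label-aligned sub-domain (10 dimensions for MNIST), consistent with the abstract's observation that 98% of the data maps into a subspace of dimensionality equal to the number of labels.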
first_indexed | 2024-03-13T00:35:31Z |
format | Article |
id | doaj.art-1da41f14562c4b6490b5311ae0cfae20 |
institution | Directory Open Access Journal |
issn | 1932-6203 |
language | English |
last_indexed | 2024-03-13T00:35:31Z |
publishDate | 2023-01-01 |
publisher | Public Library of Science (PLoS) |
record_format | Article |
series | PLoS ONE |
citation | PLoS ONE 18(6): e0287736 (2023-01-01). ISSN 1932-6203. doi:10.1371/journal.pone.0287736
title | Using deep LSD to build operators in GANs latent space with meaning in real space. |
url | https://doi.org/10.1371/journal.pone.0287736 |