siVAE: interpretable deep generative models for single-cell transcriptomes


Bibliographic Details
Main Authors: Yongin Choi, Ruoxin Li, Gerald Quon
Format: Article
Language: English
Published: BMC 2023-02-01
Series: Genome Biology
Online Access: https://doi.org/10.1186/s13059-023-02850-y
Description
Summary: Neural networks such as variational autoencoders (VAEs) perform dimensionality reduction for the visualization and analysis of genomic data, but are limited in their interpretability: it is unknown which data features are represented by each embedding dimension. We present siVAE, a VAE that is interpretable by design, thereby enhancing downstream analysis tasks. Through interpretation, siVAE also identifies gene modules and hubs without explicit gene network inference. We use siVAE to identify gene modules whose connectivity is associated with diverse phenotypes such as iPSC neuronal differentiation efficiency and dementia, showcasing the wide applicability of interpretable generative models for genomic data analysis.
ISSN: 1474-760X