Neural Embedding Allocation: Distributed Representations of Topic Models

We propose a method that uses neural embeddings to improve the performance of any given LDA-style topic model. Our method, called neural embedding allocation (NEA), deconstructs topic models (LDA or otherwise) into interpretable vector-space embeddings of words, topics, documents, authors, and so on...
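To make the idea of deconstructing a topic model into vector-space embeddings concrete, here is a minimal sketch: it parameterizes topic-word distributions as softmax-normalized dot products between word vectors and topic vectors. The dimensions, variable names, and the exact parameterization are illustrative assumptions, not the paper's precise NEA formulation.

```python
import numpy as np

# Hypothetical sizes: vocabulary V, number of topics K, embedding dimension d.
rng = np.random.default_rng(0)
V, K, d = 1000, 20, 50

word_vecs = rng.normal(scale=0.1, size=(V, d))   # one embedding per word
topic_vecs = rng.normal(scale=0.1, size=(K, d))  # one embedding per topic

# Compatibility score between each topic and each word via dot products.
logits = topic_vecs @ word_vecs.T                # shape (K, V)
logits -= logits.max(axis=1, keepdims=True)      # subtract max for stability

# Softmax over the vocabulary: each row is a distribution p(word | topic),
# playing the role of a topic-word distribution in an LDA-style model.
phi = np.exp(logits)
phi /= phi.sum(axis=1, keepdims=True)
```

In a setup like this, fitting the embeddings so that `phi` mimics a trained topic model's distributions would yield interpretable word and topic vectors; analogous vectors could be introduced for documents or authors.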

Bibliographic Details
Main Authors: Kamrun Naher Keya, Yannis Papanikolaou, James R. Foulds
Format: Article
Language: English
Published: The MIT Press, 2022-08-01
Series: Computational Linguistics
Online Access: http://dx.doi.org/10.1162/coli_a_00457