Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics
This paper proposes a new methodology to study sequential corpora by implementing a two-stage algorithm that learns time-based topics with respect to a scale of document positions and introduces the concept of <i>Topic Scaling</i>, which ranks learned topics within the same document scale.
Main Authors: | Sami Diaf, Ulrich Fritsche |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-11-01 |
Series: | Algorithms |
Subjects: | document scaling; topic models; supervised learning |
Online Access: | https://www.mdpi.com/1999-4893/15/11/430 |
_version_ | 1827645254506905600 |
---|---|
author | Sami Diaf Ulrich Fritsche |
author_facet | Sami Diaf Ulrich Fritsche |
author_sort | Sami Diaf |
collection | DOAJ |
description | This paper proposes a new methodology to study sequential corpora by implementing a two-stage algorithm that learns time-based topics with respect to a scale of document positions and introduces the concept of <i>Topic Scaling</i>, which ranks learned topics within the same document scale. The first stage ranks documents using <i>Wordfish</i>, a Poisson-based document-scaling method, to estimate document positions that serve, in the second stage, as a dependent variable to learn relevant topics via a supervised Latent Dirichlet Allocation. This novelty brings two innovations in text mining as it explains document positions, whose scale is a latent variable, and ranks the inferred topics on the document scale to match their occurrences within the corpus and track their evolution. Tested on the U.S. State Of The Union two-party addresses, this inductive approach reveals that each party dominates one end of the learned scale with interchangeable transitions that follow the parties’ term of office, while it shows for the corpus of German economic forecasting reports a shift in the narrative style adopted by economic institutions following the 2008 financial crisis. Besides a demonstrated high accuracy in predicting in-sample document positions from topic scores, this method unfolds further hidden topics that differentiate similar documents by increasing the number of learned topics to expand potential nested hierarchical topic structures. Compared to other popular topic models, <i>Topic Scaling</i> learns topics with respect to document similarities without specifying a time frequency to learn topic evolution, thus capturing broader topic patterns than dynamic topic models and yielding more interpretable outputs than a plain Latent Dirichlet Allocation. |
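The abstract describes a two-stage pipeline: first estimate latent document positions with <i>Wordfish</i> (a Poisson scaling model with document and word fixed effects plus a word-slope term), then use those positions as the response variable of a supervised LDA. Below is a minimal, self-contained sketch of the first stage only, on toy counts; the data, learning rate, and iteration count are illustrative assumptions, not the authors' implementation. It fits log λ_ij = α_i + ψ_j + β_j·θ_i by gradient ascent on the Poisson log-likelihood, standardizing θ each step for identification.

```python
import math

# Toy document-term counts (illustrative): rows = documents, cols = words.
# Docs 0-1 favor the first three words, docs 2-3 the last three.
counts = [
    [20, 15, 10, 2, 1, 1],
    [15, 12, 8, 4, 3, 2],
    [2, 3, 4, 8, 12, 15],
    [1, 1, 2, 10, 15, 20],
]
n_docs, n_words = len(counts), len(counts[0])

# Wordfish parameters: log lambda_ij = alpha_i + psi_j + beta_j * theta_i
alpha = [0.0] * n_docs                                  # document fixed effects
psi = [math.log(sum(counts[i][j] for i in range(n_docs)) / n_docs)
       for j in range(n_words)]                         # word fixed effects
beta = [0.0] * n_words                                  # word slopes

# Heuristic start for positions: log ratio of first-half vs second-half mass.
theta = [math.log((sum(row[:3]) + 0.5) / (sum(row[3:]) + 0.5)) for row in counts]

def standardize(v):
    """Mean-0, sd-1 rescaling of theta (Wordfish identification constraint)."""
    m = sum(v) / len(v)
    s = math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return [(x - m) / s for x in v]

theta = standardize(theta)

lr = 0.005
for _ in range(3000):
    lam = [[math.exp(alpha[i] + psi[j] + beta[j] * theta[i])
            for j in range(n_words)] for i in range(n_docs)]
    resid = [[counts[i][j] - lam[i][j] for j in range(n_words)]
             for i in range(n_docs)]
    # Poisson log-likelihood gradients are (count - rate) weighted sums.
    for i in range(n_docs):
        alpha[i] += lr * sum(resid[i])
    for j in range(n_words):
        psi[j] += lr * sum(resid[i][j] for i in range(n_docs))
        beta[j] += lr * sum(theta[i] * resid[i][j] for i in range(n_docs))
    for i in range(n_docs):
        theta[i] += lr * sum(beta[j] * resid[i][j] for j in range(n_words))
    theta = standardize(theta)

if theta[0] > theta[-1]:          # fix the reflection invariance of the scale
    theta = [-t for t in theta]

print([round(t, 2) for t in theta])   # estimated document positions
```

In the second stage, these positions would serve as the continuous response of a supervised LDA (for example, the `SLDAModel` class in the tomotopy library), so that the learned topics can be ranked along the same scale.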
first_indexed | 2024-03-09T18:32:48Z |
format | Article |
id | doaj.art-fecaaf9807d94a2bbfe75ff1f4b7df90 |
institution | Directory Open Access Journal |
issn | 1999-4893 |
language | English |
last_indexed | 2024-03-09T18:32:48Z |
publishDate | 2022-11-01 |
publisher | MDPI AG |
record_format | Article |
series | Algorithms |
spelling | doaj.art-fecaaf9807d94a2bbfe75ff1f4b7df90; 2023-11-24T07:27:35Z; eng; MDPI AG; Algorithms; 1999-4893; 2022-11-01; vol. 15, no. 11, art. 430; 10.3390/a15110430; Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics; Sami Diaf, Ulrich Fritsche (Faculty of Business, Economics and Social Sciences, Department Socioeconomics, Universität Hamburg, Welckerstr. 8, 20354 Hamburg, Germany); https://www.mdpi.com/1999-4893/15/11/430; document scaling; topic models; supervised learning |
spellingShingle | Sami Diaf Ulrich Fritsche Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics Algorithms document scaling topic models supervised learning |
title | Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics |
title_full | Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics |
title_fullStr | Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics |
title_full_unstemmed | Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics |
title_short | Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics |
title_sort | topic scaling a joint document scaling topic model approach to learn time specific topics |
topic | document scaling topic models supervised learning |
url | https://www.mdpi.com/1999-4893/15/11/430 |
work_keys_str_mv | AT samidiaf topicscalingajointdocumentscalingtopicmodelapproachtolearntimespecifictopics AT ulrichfritsche topicscalingajointdocumentscalingtopicmodelapproachtolearntimespecifictopics |