Enhancing Text Classification by Graph Neural Networks With Multi-Granular Topic-Aware Graph


Bibliographic Details
Main Authors: Yongchun Gu, Yi Wang, Heng-Ru Zhang, Jiao Wu, Xingquan Gu
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10054405/
Description
Summary: Text classification based on graph neural networks (GNNs) has been widely studied by virtue of its potential to capture complex, cross-granularity relations among texts of different types by learning on a text graph. Existing methods typically construct text graphs over words and documents to capture intra-class document representations via word-word and word-document propagation. However, a natural problem is that polysemous words may become an information medium between documents of different categories, promoting heterophilous information propagation. This issue somewhat constrains text classification performance. This paper proposes a novel GNN-based text classification method from a multi-granular topic-aware perspective, referred to as Text-MGNN. Specifically, topic nodes are introduced to build a triple node set of "word, document, topic," and multi-granularity relations over this triple node set are modeled on a text graph. Introducing topic nodes has three significant advantages: first, it strengthens propagation among topics, words, and documents; second, it enhances class-aware representation learning; third, it mitigates the effect of heterophilous information caused by polysemous words. Extensive experiments are conducted on three real-world datasets, and the results validate that the proposed method outperforms 11 baseline methods.
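The record contains no code, so the following is only a rough, hypothetical sketch of the "word, document, topic" triple node set described in the abstract. The use of LDA for topic nodes, raw term counts for word-document edges, and a single dense adjacency matrix are all assumptions for illustration; the paper's actual graph construction and edge weighting may differ.

```python
# Hypothetical sketch of a heterogeneous word-document-topic graph.
# ASSUMPTIONS: LDA topics and count-based edge weights; not the
# paper's exact construction.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the bank approved the loan",
    "the river bank flooded after rain",
    "interest rates at the bank rose",
]

# Word-document edges from term counts (TF-IDF would also work).
vec = CountVectorizer()
X = vec.fit_transform(docs)            # shape: (n_docs, n_words)
n_docs, n_words = X.shape

# Topic nodes: LDA yields document-topic and topic-word affinities.
n_topics = 2
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
doc_topic = lda.fit_transform(X)       # (n_docs, n_topics)
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Assemble one adjacency over the triple node set
# [words | documents | topics]; a GNN would propagate over it.
n = n_words + n_docs + n_topics
A = np.zeros((n, n))
w, d, t = 0, n_words, n_words + n_docs
A[w:d, d:t] = X.T.toarray()            # word-document edges
A[d:t, t:] = doc_topic                 # document-topic edges
A[w:d, t:] = topic_word.T              # word-topic edges
A = A + A.T                            # make the graph undirected
A[np.arange(n), np.arange(n)] = 1.0    # add self-loops
```

A GNN layer (e.g., a GCN) would then propagate node features over A. The intuition, per the abstract, is that topic nodes give polysemous words additional class-consistent neighbors beyond their direct word-document links, weakening heterophilous propagation between documents of different categories.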
ISSN: 2169-3536