Providing Post-Hoc Explanation for Node Representation Learning Models Through Inductive Conformal Predictions

Bibliographic Details
Main Author: Hogun Park
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10003193/
Description
Summary: Learning with graph-structured data, such as social, biological, and financial networks, requires effective low-dimensional representations to handle their large and complex interactions. Recently, with advances in neural networks and embedding algorithms, many unsupervised approaches have been proposed for downstream tasks with promising results; however, there has been limited research on interpreting the unsupervised representations and, specifically, on understanding which neighboring nodes contribute to the representation of a given node. To mitigate this problem, we propose a statistical framework to interpret the learned representations. Many existing works, designed for supervised node representation models, compute the difference in prediction scores after perturbing the edges of a candidate explanation node; in contrast, our proposed framework leverages a conformal prediction (CP)-based statistical test to verify the importance of the candidate node in each node representation. In our evaluation, the proposed framework was validated across many experimental settings and achieved promising results compared with recent baseline methods.
ISSN: 2169-3536
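
The abstract contrasts raw prediction-score differences with an inductive conformal prediction (ICP) test of a candidate neighbor's importance. The Python sketch below illustrates only the general ICP p-value mechanism on synthetic "embedding shift" scores; the scoring function, the synthetic data, and the 0.05 threshold are illustrative assumptions, not the paper's actual procedure.

import numpy as np

rng = np.random.default_rng(0)

def icp_p_value(cal_scores, test_score):
    # Inductive conformal p-value: the (smoothed) fraction of
    # calibration nonconformity scores at least as large as the
    # test score. A small p-value means the test case is unusually
    # nonconforming relative to the calibration set.
    cal_scores = np.asarray(cal_scores)
    return (np.sum(cal_scores >= test_score) + 1) / (len(cal_scores) + 1)

# Synthetic stand-in (assumption, not the paper's data): calibration
# scores mimic embedding shifts from perturbing unimportant neighbors,
# while the test score mimics the shift from perturbing the candidate
# neighbor under evaluation.
cal_scores = rng.normal(loc=0.10, scale=0.02, size=200)
test_score = 0.35

p = icp_p_value(cal_scores, test_score)
print(f"p-value = {p:.4f}")
if p < 0.05:  # illustrative significance level
    print("Candidate neighbor is flagged as important to the embedding.")

Under this toy setup the test score far exceeds every calibration score, so the p-value is roughly 1/201 and the candidate is flagged; in the paper's setting the nonconformity scores would instead come from the learned node representations.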