Semantic interpretation and validation of graph attention-based explanations for GNN models

In this work, we propose a methodology for investigating the use of semantic attention to enhance the explainability of Graph Neural Network (GNN)-based models. Graph Deep Learning (GDL) has emerged as a promising field for tasks like scene interpretation, leveraging flexible graph structures to concisely describe complex features and relationships. As traditional explainability methods used in eXplainable AI (XAI) cannot be directly applied to such structures, graph-specific approaches are introduced. Attention has been previously employed to estimate the importance of input features in GDL; however, the fidelity of this method in generating accurate and consistent explanations has been questioned. To evaluate the validity of using attention weights as feature importance indicators, we introduce semantically-informed perturbations and correlate predicted attention weights with the accuracy of the model. Our work extends existing attention-based graph explainability methods by analysing the divergence in the attention distributions in relation to semantically sorted feature sets and the behaviour of a GNN model, efficiently estimating feature importance. We apply our methodology to a lidar point cloud estimation model, successfully identifying key semantic classes that contribute to enhanced performance and effectively generating reliable post-hoc semantic explanations.
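
The perturbation-and-correlation step lends itself to a compact illustration. Below is a minimal sketch, not the paper's implementation: it builds a toy graph with hypothetical semantic labels, reads edge attention weights from a PyTorch Geometric GATConv layer, zeroes out one semantic class at a time, and checks whether the attention mass a class attracts tracks the accuracy drop its removal causes. The graph, labels, targets, and accuracy proxy are all illustrative assumptions.

# Minimal sketch (illustrative only, not the authors' code): correlate the
# attention mass assigned to each semantic class with the accuracy drop
# observed when that class is perturbed.
import torch
from scipy.stats import spearmanr
from torch_geometric.nn import GATConv

torch.manual_seed(0)

# Toy graph: 100 nodes, 16-dim features, 400 random edges, 4 hypothetical
# semantic classes, and binary node targets standing in for a real task.
num_nodes, num_sem = 100, 4
x = torch.randn(num_nodes, 16)
edge_index = torch.randint(0, num_nodes, (2, 400))
sem = torch.randint(0, num_sem, (num_nodes,))
y = torch.randint(0, 2, (num_nodes,))

conv = GATConv(16, 2, heads=1)  # stand-in for a trained attention-based GNN

@torch.no_grad()
def accuracy(feats):
    """Proxy accuracy; a real pipeline would evaluate the trained model."""
    logits = conv(feats, edge_index)
    return (logits.argmax(dim=-1) == y).float().mean().item()

# Edge attention weights; alpha[k] weights the message sent along edge k.
with torch.no_grad():
    _, (ei, alpha) = conv(x, edge_index, return_attention_weights=True)

# Mean attention paid to messages originating from each semantic class.
att_mass = [alpha[sem[ei[0]] == c].mean().item() for c in range(num_sem)]

# Semantically-informed perturbation: zero one class's features at a time
# and record the accuracy drop relative to the unperturbed baseline.
base = accuracy(x)
drops = []
for c in range(num_sem):
    x_pert = x.clone()
    x_pert[sem == c] = 0.0
    drops.append(base - accuracy(x_pert))

# If attention faithfully indicates importance, the rankings should agree.
rho, _ = spearmanr(att_mass, drops)
print(f"Spearman(attention mass, accuracy drop) = {rho:.3f}")

A high rank correlation across classes would support reading attention weights as importance indicators; a weak one would echo the fidelity concerns raised above.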

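The divergence analysis admits an equally brief sketch. The snippet below is again an assumption-laden illustration, using synthetic stand-in attention vectors rather than real model outputs: it bins edge attention by the semantic class of each edge's source node, normalises the bins into a distribution, and uses KL divergence to quantify how far a perturbation shifts attention across the semantically sorted feature sets.

# Illustrative sketch of the divergence analysis; attention vectors here are
# synthetic stand-ins, not outputs of the paper's model.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
num_edges, num_sem = 400, 4
src_class = rng.integers(0, num_sem, num_edges)  # semantic class of each edge's source

def class_attention_dist(alpha, src_class, num_sem):
    """Normalise edge attention mass into a distribution over semantic classes."""
    mass = np.array([alpha[src_class == c].sum() for c in range(num_sem)])
    return mass / mass.sum()

# Stand-in attention vectors; in practice these would come from the GNN
# before and after a semantically-informed perturbation.
alpha_ref = rng.dirichlet(np.ones(num_edges))
alpha_pert = rng.dirichlet(np.ones(num_edges))

p = class_attention_dist(alpha_ref, src_class, num_sem)
q = class_attention_dist(alpha_pert, src_class, num_sem)

# scipy.stats.entropy(p, q) computes KL(p || q); a large divergence flags a
# perturbation that substantially redistributes the model's attention.
print(f"KL(p || q) = {entropy(p, q):.4f}")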

Bibliographic Details
Main Authors: Panagiotaki, E; De Martini, D; Kunze, L
Format: Conference item
Language: English
Published: IEEE, 2024
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:e41a9bf8-3fa9-4e3f-b891-4fbfe75d1340