RevelioNN: Retrospective Extraction of Visual and Logical Insights for Ontology-based Interpretation of Neural Networks


Bibliographic Details
Main Authors: Anton Agafonov, Andrew Ponomarev
Format: Article
Language: English
Published: FRUCT, 2023-11-01
Series: Proceedings of the XXth Conference of Open Innovations Association FRUCT
Subjects: explainable AI; XAI; interpretation; black-box; convolutional neural network; ontology; concept extraction; visual explanation; logical explanation
Online Access: https://www.fruct.org/publications/volume-34/fruct34/files/Aga.pdf
Collection: DOAJ
Description: The need for AI explainability, which involves helping humans understand why an AI algorithm arrived at a particular decision, is crucial in numerous critical applications. Although deep neural networks play a significant role in modern AI, they inherently lack transparency. Consequently, various approaches have been suggested to clarify their decision-making processes to human users. One promising category of such approaches involves ontology-based methods. These methods have the potential to generate explanations using concepts from an ontology that are familiar to domain experts and the logical connections between these concepts. Specifically, post-hoc ontology-based explanations typically rely on concept extraction, which establishes a link between the internal representations formed by the neural network's inner layers and the domain concepts outlined in the ontology. This paper introduces the RevelioNN library, which comprises post-hoc algorithms designed to explain predictions made by deep convolutional neural networks in binary classification tasks, with a focus on leveraging ontologies. The library incorporates cutting-edge concept extraction techniques centered around constructing mapping networks. Furthermore, it provides the capability to form both logical and visual explanations for the predictions of convolutional neural networks by utilizing ontology concepts derived from their internal representations. An essential benefit of this library is its adaptability to interpret predictions from any pre-trained convolutional network implemented using the PyTorch framework.
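The mapping-network idea in the description above can be illustrated with a minimal PyTorch sketch. This is NOT the RevelioNN API: the toy CNN, the chosen layer, and the concept being probed are all hypothetical. The general pattern is what the abstract describes: capture the activations of an inner layer of a pre-trained convolutional network with a forward hook, then feed them to a small "mapping network" that estimates whether an ontology concept is present.

```python
# Illustrative sketch (assumed names, not the RevelioNN API): probing a
# CNN's internal representation for an ontology concept.
import torch
import torch.nn as nn

# A toy stand-in for a pre-trained black-box CNN (binary classifier).
cnn = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 1),
)

# Capture an inner layer's activations with a forward hook.
activations = {}
def hook(module, inputs, output):
    activations["feat"] = output.detach()
cnn[2].register_forward_hook(hook)  # hook on the pooling layer

# Mapping network: inner representation -> probability that a
# hypothetical ontology concept (e.g. "contains wheel") is present.
# In practice it would be trained on concept-labelled data.
mapping_net = nn.Sequential(nn.Flatten(), nn.Linear(8 * 4 * 4, 1), nn.Sigmoid())

x = torch.randn(2, 3, 32, 32)   # a mini-batch of images
_ = cnn(x)                      # forward pass populates activations["feat"]
concept_prob = mapping_net(activations["feat"])
print(concept_prob.shape)       # torch.Size([2, 1])
```

Once such concept probabilities are available for several ontology concepts, they can serve as the atoms from which logical explanations over the ontology are assembled and as targets for visual attribution.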
DOAJ record ID: doaj.art-72d9f0219dfd4253862143bf37df1f2d
ISSN: 2305-7254; 2343-0737
DOI: 10.23919/FRUCT60429.2023.10328156
Video: https://youtu.be/dSfBBvMbDA0
Affiliation (both authors): St. Petersburg Federal Research Center of the Russian Academy of Sciences
Topics: explainable AI; XAI; interpretation; black-box; convolutional neural network; ontology; concept extraction; visual explanation; logical explanation