Ontology Concept Extraction Algorithm for Deep Neural Networks
An important drawback of deep neural networks limiting their application in critical tasks is the lack of explainability. Recently, several methods have been proposed to explain and interpret the results obtained by deep neural networks; however, the majority of these methods are targeted mostly at...
| Main Authors: | Andrew Ponomarev, Anton Agafonov |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | FRUCT, 2022-11-01 |
| Series: | Proceedings of the XXth Conference of Open Innovations Association FRUCT |
| Online Access: | https://www.fruct.org/publications/volume-32/fruct32/files/Pon.pdf |
Similar Items
- RevelioNN: Retrospective Extraction of Visual and Logical Insights for Ontology-based Interpretation of Neural Networks
  by: Anton Agafonov, et al.
  Published: (2023-11-01)
- When neuro-robots go wrong: A review
  by: Muhammad Salar Khan, et al.
  Published: (2023-02-01)
- Explanations for Neural Networks by Neural Networks
  by: Sascha Marton, et al.
  Published: (2022-01-01)
- Augmenting Deep Neural Networks with Symbolic Educational Knowledge: Towards Trustworthy and Interpretable AI for Education
  by: Danial Hooshyar, et al.
  Published: (2024-03-01)
- Explainable AI: A Neurally-Inspired Decision Stack Framework
  by: Muhammad Salar Khan, et al.
  Published: (2022-09-01)