Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach
Model understanding is critical in many domains, particularly in those involving high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates th...
| Main Authors: | Kaspars Sudars, Ivars Namatēvs, Kaspars Ozols |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2022-01-01 |
| Series: | Journal of Imaging |
| Online Access: | https://www.mdpi.com/2313-433X/8/2/30 |
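The abstract above refers to a perturbation-based explainability approach for a CNN traffic sign classifier. As a minimal sketch of one common member of that family, occlusion sensitivity, and not necessarily the exact procedure evaluated in the paper, the Python snippet below slides a neutral patch over the input and records how much the target class score drops at each position. The names `occlusion_map` and `model_fn`, and the `patch`, `stride`, and `fill` parameters, are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of occlusion-sensitivity saliency (one perturbation-based XAI
# method); `model_fn` is a hypothetical stand-in for any image classifier
# that maps an image to a vector of class scores.
import numpy as np

def occlusion_map(image, model_fn, target_class, patch=8, stride=4, fill=0.5):
    """Return a heat map of score drops caused by occluding each region."""
    h, w = image.shape[:2]
    base = model_fn(image)[target_class]           # unperturbed class score
    heat = np.zeros(((h - patch) // stride + 1,
                     (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            pert = image.copy()
            pert[y:y + patch, x:x + patch] = fill  # occlude one region
            heat[i, j] = base - model_fn(pert)[target_class]
    return heat                                    # high = important region

# Toy usage with a dummy two-class "classifier" keyed on the image centre.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    def model_fn(x):                               # hypothetical classifier
        centre = x[12:20, 12:20].mean()
        return np.array([1.0 - centre, centre])    # scores for two classes
    print(occlusion_map(img, model_fn, target_class=1).round(2))
```

Patch size and stride trade spatial resolution against the number of forward passes; the fill value should approximate an uninformative input for the model at hand.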
Similar Items
- Towards Explainability of the Latent Space by Disentangled Representation Learning
  by: Ivars Namatēvs, et al.
  Published: (2023-11-01)
- Modular Neural Networks for Osteoporosis Detection in Mandibular Cone-Beam Computed Tomography Scans
  by: Ivars Namatevs, et al.
  Published: (2023-09-01)
- Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension
  by: Francisco S. Marcondes, et al.
  Published: (2021-10-01)
- Deep Learning for Wind and Solar Energy Forecasting in Hydrogen Production
  by: Arturs Nikulins, et al.
  Published: (2024-02-01)
- Explaining graph convolutional network predictions for clinicians—An explainable AI approach to Alzheimer's disease classification
  by: Sule Tekkesinoglu, et al.
  Published: (2024-01-01)