Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach


Bibliographic Details
Main Authors: Kaspars Sudars, Ivars Namatēvs, Kaspars Ozols
Format: Article
Language: English
Published: MDPI AG, 2022-01-01
Series: Journal of Imaging
Subjects: explainable AI; convolutional neural network; network compression
Online Access: https://www.mdpi.com/2313-433X/8/2/30
Description: Model understanding is critical in many domains, particularly those involving high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks (CNNs). This paper evaluates the traffic sign classifier, a deep neural network (DNN) from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project, for explainability. The resulting explanations were then used to compress the PRYSTINE CNN classifier by removing its vague (low-impact) kernels, and the classifier's precision was evaluated under different pruning scenarios. The proposed methodology was realised in original traffic sign and traffic light classification and explanation code. First, the kernels of the network were evaluated for explainability: a post-hoc, local, meaningful perturbation-based forward explanation method was integrated into the model to assess the status of each kernel, distinguishing high-impact from low-impact kernels in the CNN. Second, the vague kernels of the last layer before the fully connected layer were withdrawn from the network. Third, the network's precision was evaluated at different kernel compression levels. It is shown that with this XAI-based approach to kernel compression, pruning 5% of the kernels leads to only a 2% loss in traffic sign and traffic light classification precision. The proposed methodology is valuable where execution time and processing capacity are at a premium.
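The pipeline the description outlines — score each convolutional kernel by how much precision drops when its activation is perturbed (zeroed out), then withdraw the lowest-impact ("vague") kernels — can be sketched as follows. This is a minimal illustration of the idea, not the authors' code: the `evaluate` callback, the zeroing perturbation, and the function names are assumptions.

```python
def kernel_importance(evaluate, n_kernels):
    """Score each kernel by the precision drop observed when its
    activation is zeroed out (a meaningful-perturbation probe).
    `evaluate(mask)` must return the model's precision when kernel k
    is scaled by mask[k]."""
    baseline = evaluate([1.0] * n_kernels)        # unperturbed precision
    scores = []
    for k in range(n_kernels):
        mask = [1.0] * n_kernels
        mask[k] = 0.0                             # perturb: silence kernel k
        scores.append(baseline - evaluate(mask))  # big drop => high impact
    return scores


def prune_mask(scores, fraction):
    """Return a 0/1 mask that withdraws the `fraction` lowest-impact
    (vaguest) kernels and keeps the rest."""
    n_drop = int(len(scores) * fraction)
    vaguest = sorted(range(len(scores)), key=scores.__getitem__)[:n_drop]
    mask = [1.0] * len(scores)
    for k in vaguest:
        mask[k] = 0.0
    return mask


# Toy stand-in for the classifier: precision is a weighted sum of kernels.
weights = [0.50, 0.05, 0.30, 0.01]
evaluate = lambda mask: sum(m * w for m, w in zip(mask, weights))

scores = kernel_importance(evaluate, 4)   # approximately the weights here
mask = prune_mask(scores, 0.5)            # drops the two weakest kernels
```

In the paper's setting, `evaluate` would run the CNN over a validation set with the masked kernels of the last convolutional layer silenced; here any callable mapping a mask to a score works.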
ISSN: 2313-433X
DOI: 10.3390/jimaging8020030
Author Affiliation: Institute of Electronics and Computer Science, Dzerbenes Str. 14, LV-1006 Riga, Latvia