Designing an Interpretability-Based Model to Explain the Artificial Intelligence Algorithms in Healthcare

The lack of interpretability in artificial intelligence models (i.e., deep learning, machine learning, and rule-based models) is an obstacle to their widespread adoption in the healthcare domain. This absence of understandability and transparency frequently leads to (i) inadequate accountability and (ii) a consequent reduction in the quality of the models' predictive results. Conversely, interpretable predictions help clinicians understand and trust these complex models, and data protection regulations worldwide emphasize the plausibility and verifiability of AI models' predictions. To help tackle this challenge, we designed an interpretability-based model whose algorithms approximate human-like reasoning: they statistically analyze the datasets and calculate the relative weights of the feature variables drawn from medical images and patient symptoms. The relative weights represent the importance of each variable in predictive decision-making and are also used to derive the positive and negative probabilities of having the disease, yielding high-fidelity explanations. The primary goal of the model is thus to give insight into the prediction process and to explain how model predictions are reached, while also demonstrating accuracy. Two experiments on COVID-19 datasets demonstrated the effectiveness and interpretability of the new model.
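The abstract outlines a two-step procedure: compute a relative weight for each feature variable via statistical analysis of the dataset, then combine those weights to obtain positive and negative disease probabilities. Below is a minimal Python sketch of that idea. The exact statistic used in the paper is not stated in the abstract, so the absolute Pearson correlation is used here as an illustrative stand-in, and the function names (relative_weights, disease_probabilities) and toy data are hypothetical.

import numpy as np

def relative_weights(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Score each feature variable by the absolute correlation of its
    column with the diagnosis label, then normalize so the weights sum
    to 1 (an assumed stand-in for the paper's statistical analysis)."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return scores / scores.sum()

def disease_probabilities(weights: np.ndarray, x: np.ndarray) -> tuple[float, float]:
    """Combine a patient's 0/1-coded feature values with the relative
    weights into a positive probability; the negative probability is
    taken as its complement."""
    p_pos = float(np.clip(weights @ x, 0.0, 1.0))
    return p_pos, 1.0 - p_pos

# Toy usage: 6 patients, 3 binary findings (e.g., lung opacity, fever, cough).
X = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1],
              [0, 0, 0], [1, 1, 1], [0, 0, 1]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0], dtype=float)
w = relative_weights(X, y)
print("relative weights:", w)                        # importance of each variable
print("P(pos), P(neg):", disease_probabilities(w, X[0]))

Because the weights are explicit and per-variable, a clinician can read off which findings drove a given prediction, which is the kind of high-fidelity explanation the abstract describes.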

Bibliographic Details
Main Authors: Mohammad Ennab, Hamid Mcheick
Author Affiliation: Department of Computer Sciences and Mathematics, University of Québec at Chicoutimi, Chicoutimi, QC G7H 2B1, Canada
Format: Article
Language: English
Published: MDPI AG, 2022-06-01
Series: Diagnostics, Vol. 12, No. 7, Article 1557
ISSN: 2075-4418
DOI: 10.3390/diagnostics12071557
Subjects: interpretability; artificial intelligence; relative weights; probability
Online Access: https://www.mdpi.com/2075-4418/12/7/1557