Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation


Bibliographic Details
Main Authors: Ihsan Ullah, Andre Rios, Vaibhav Gala, Susan McKeever
Format: Article
Language: English
Published: MDPI AG, 2021-12-01
Series: Applied Sciences
Subjects: explainability; 1D-CNN; structured data; layer-wise relevance propagation; deep learning; transparency
Online Access:https://www.mdpi.com/2076-3417/12/1/136
Description: Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While the explainability of deep learning models is a well-known challenge, a further challenge is the clarity of the explanation itself for the model's stakeholders. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive, human-readable heat maps of input images. We present a novel application of LRP to tabular datasets containing mixed (categorical and numerical) data, using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show that LRP is more effective for explainability than the established techniques of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). This effectiveness holds both locally, at the level of individual samples, and holistically, over the whole test set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real-time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, opening up a new area of research: using XAI as an approach to feature subset selection.
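The description names Layer-wise Relevance Propagation (LRP) as the technique applied to a 1D-CNN over tabular features. As a rough illustration of what LRP computes, the sketch below applies the standard epsilon-rule to a toy fully connected ReLU network in NumPy; the network, its weights, and the `lrp_dense` helper are invented for this illustration and are not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the LRP epsilon-rule on a toy dense ReLU network
# (illustrative only; not the paper's 1D-CNN).

rng = np.random.default_rng(0)

# A tiny 2-layer network: 4 input features -> 3 hidden units -> 2 logits.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

def forward(x):
    """Run the network, keeping each layer's activations."""
    a1 = np.maximum(0.0, x @ W1)   # hidden ReLU activations
    out = a1 @ W2                  # output logits
    return a1, out

def lrp_dense(a, W, R_out, eps=1e-6):
    """Epsilon-rule: redistribute relevance R_out from a layer's outputs
    back onto its inputs a, in proportion to each input's contribution."""
    z = a @ W                                  # pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer, avoids /0
    s = R_out / z                              # relevance per unit of z
    return a * (W @ s)                         # relevance of the inputs

x = np.array([1.0, -0.5, 2.0, 0.3])
a1, out = forward(x)

# Seed relevance with the predicted class's logit.
k = int(np.argmax(out))
R_out = np.zeros(2)
R_out[k] = out[k]

R_hidden = lrp_dense(a1, W2, R_out)    # propagate through layer 2
R_input = lrp_dense(x, W1, R_hidden)   # propagate through layer 1

# Each input feature now carries a relevance score; for tabular data
# these scores play the role of the per-pixel heat maps used in vision.
print(R_input)
```

By construction the epsilon-rule approximately conserves relevance: the per-feature scores sum to (nearly) the explained logit, which is what makes attributions comparable across samples and, aggregated over a test set, usable as a holistic feature ranking.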
ISSN: 2076-3417
DOI: 10.3390/app12010136
Author affiliations:
Ihsan Ullah: CeADAR Ireland's Center for Applied AI, University College Dublin, D04 V2N9 Dublin, Ireland
Andre Rios, Vaibhav Gala, Susan McKeever: CeADAR Ireland's Center for Applied AI, Technological University Dublin, D07 ADY7 Dublin, Ireland