Robustness Analysis of Deep Learning-Based Lung Cancer Classification Using Explainable Methods

Deep Learning (DL)-based classification algorithms have been shown to achieve top results in clinical diagnosis, namely with lung cancer datasets. However, the complexity and opacity of these models, together with the still-scant training datasets, call for the development of explainable modeling methods that enable interpretation of the results. To this end, this paper proposes a novel interpretability approach and demonstrates how it can be used on a lung cancer malignancy DL classifier to assess its stability and congruence even when fed a small number of image samples. Additionally, by disclosing the regions of the medical images most relevant to the resulting classification, the approach provides important insights into the corresponding clinical meaning apprehended by the algorithm. Explanations of the results produced by ten different models for the same test sample are compared; these attest to the stability of the approach and to the algorithm's consistent focus on the same image regions.

Bibliographic Details
Main Authors: Mafalda Malafaia, Francisco Silva, Ines Neves, Tania Pereira, Helder P. Oliveira
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Subjects: CT scan; congruence; deep learning; diagnostic imaging; interpretability; malignancy
Online Access: https://ieeexplore.ieee.org/document/9919875/
author Mafalda Malafaia
Francisco Silva
Ines Neves
Tania Pereira
Helder P. Oliveira
collection DOAJ
description Deep Learning (DL)-based classification algorithms have been shown to achieve top results in clinical diagnosis, namely with lung cancer datasets. However, the complexity and opacity of these models, together with the still-scant training datasets, call for the development of explainable modeling methods that enable interpretation of the results. To this end, this paper proposes a novel interpretability approach and demonstrates how it can be used on a lung cancer malignancy DL classifier to assess its stability and congruence even when fed a small number of image samples. Additionally, by disclosing the regions of the medical images most relevant to the resulting classification, the approach provides important insights into the corresponding clinical meaning apprehended by the algorithm. Explanations of the results produced by ten different models for the same test sample are compared; these attest to the stability of the approach and to the algorithm's consistent focus on the same image regions.
format Article
id doaj.art-33dd000bf4364c9a9fc5c79c9cbfe45f
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2022-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-33dd000bf4364c9a9fc5c79c9cbfe45f
Robustness Analysis of Deep Learning-Based Lung Cancer Classification Using Explainable Methods. IEEE Access (ISSN 2169-3536), vol. 10, pp. 112731-112741, published 2022-01-01. DOI: 10.1109/ACCESS.2022.3214824. IEEE article number: 9919875.
Authors: Mafalda Malafaia (ORCID: 0000-0002-8081-0454), Francisco Silva (ORCID: 0000-0003-3069-2282), Ines Neves, Tania Pereira, Helder P. Oliveira (ORCID: 0000-0002-6193-8540), all affiliated with INESC TEC, Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal.
title Robustness Analysis of Deep Learning-Based Lung Cancer Classification Using Explainable Methods
topic CT scan
congruence
deep learning
diagnostic imaging
interpretability
malignancy
url https://ieeexplore.ieee.org/document/9919875/