A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging

Bibliographic Details
Main Authors: Mehmet A. Gulum, Christopher M. Trombley, Mehmed Kantardzic
Format: Article
Language: English
Published: MDPI AG, 2021-05-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/11/10/4573
Summary: Deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks in recent years, achieving performance that rivals radiologists and is suitable for implementation as a clinical tool. However, a significant problem is that these models are black-box algorithms and are therefore intrinsically unexplainable. This creates a barrier to clinical implementation, owing to the lack of trust and transparency characteristic of black-box algorithms. Additionally, recent regulations prevent the deployment of unexplainable models in clinical settings, further demonstrating the need for explainability. To mitigate these concerns, recent studies attempt to overcome these issues by modifying deep learning architectures or by providing after-the-fact explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions for closing this gap are provided.
ISSN: 2076-3417