Quantifying the uncertainty of deep learning-based computer-aided diagnosis for patient safety

In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that the prediction uncertainty correlates highly with goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling with dropout is applied at test time for uncertainty estimation: multiple forward passes are performed to obtain a distribution over the class labels, and the variance and the entropy of this distribution are used as uncertainty metrics. Our results show a strong correlation of ρ = 0.99 between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help to transfer such systems into clinical routine and to increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
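The test-time procedure summarized above can be sketched compactly. The following is a minimal, illustrative PyTorch implementation of Monte Carlo dropout, not the authors' original code: the function name, the number of samples, and the dropout-layer handling are assumptions, and the paper's specific placement of dropout before every ResNet building block is omitted for brevity.

    # Hypothetical sketch of test-time Monte Carlo dropout
    # (not the authors' original implementation).
    import torch
    import torch.nn.functional as F

    def mc_dropout_predict(model, x, num_samples=20):
        """Run `num_samples` stochastic forward passes with dropout kept
        active and return the mean class probabilities together with the
        entropy and variance of the sampled predictive distribution."""
        model.eval()  # keep batch-norm layers in inference mode
        # Re-enable only the dropout layers so each pass is stochastic.
        for m in model.modules():
            if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
                m.train()

        with torch.no_grad():
            # probs has shape (num_samples, batch_size, num_classes)
            probs = torch.stack(
                [F.softmax(model(x), dim=-1) for _ in range(num_samples)]
            )

        mean_probs = probs.mean(dim=0)  # Monte Carlo estimate of p(y | x)
        # Predictive entropy: high values flag unreliable predictions.
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
        # Total variance of the class probabilities across the samples.
        variance = probs.var(dim=0).sum(dim=-1)
        return mean_probs, entropy, variance

In such a setup, a scan whose entropy or variance exceeds a chosen threshold could be referred to a physician rather than auto-diagnosed, which is the safety mechanism the abstract argues for.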

Bibliographic Details
Main Authors: Laves Max-Heinrich, Ihler Sontje, Ortmaier Tobias, Kahrs Lüder A.
Affiliations: Institute of Mechatronic Systems, Appelstr. 11A, Hannover, Germany (Laves, Ihler, Ortmaier); Center for Image Guided Innovation and Therapeutic Intervention (CIGITI), The Hospital for Sick Children, 555 University Ave, Toronto, Canada (Kahrs)
Format: Article
Language: English
Published: De Gruyter 2019-09-01
Series: Current Directions in Biomedical Engineering, Vol. 5, No. 1, pp. 223–226
ISSN: 2364-5504
Subjects: bayesian approximation, optical coherence tomography, retina, machine learning
Online Access: https://doi.org/10.1515/cdbme-2019-0057