Reproducibility of deep learning in digital pathology whole slide image analysis.
For a method to be widely adopted in medical research or clinical practice, it needs to be reproducible so that clinicians and regulators can have confidence in its use. Machine learning and deep learning have a particular set of challenges around reproducibility. Small differences in the settings or the data used for training a model can lead to large differences in the outcomes of experiments. In this work, three top-performing algorithms from the Camelyon grand challenges are reproduced using only information presented in the associated papers, and the results are then compared to those reported. Seemingly minor details were found to be critical to performance, yet their importance is difficult to appreciate until the actual reproduction is attempted. We observed that authors generally describe the key technical aspects of their models well but fail to maintain the same reporting standards when it comes to data preprocessing, which is essential to reproducibility. As an important contribution of the present study and its findings, we introduce a reproducibility checklist that tabulates the information that needs to be reported in histopathology ML-based work in order to make it reproducible.
Main Authors: | Christina Fell, Mahnaz Mohammadi, David Morrison, Ognjen Arandjelovic, Peter Caie, David Harris-Birtill |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2022-12-01 |
Series: | PLOS Digital Health |
ISSN: | 2767-3170 |
Online Access: | https://doi.org/10.1371/journal.pdig.0000145 |