Measuring digital pathology throughput and tissue dropouts
Background: Digital pathology operations that precede viewing by a pathologist have a substantial impact on costs and fidelity of the digital image. Scan time and file size determine throughput and storage costs, whereas tissue omission during digital capture (“dropouts”) compromises downstream interpretation.
Main Authors: | George L. Mutter, David S. Milstone, David H. Hwang, Stephanie Siegmund, Alexander Bruce |
Format: | Article |
Language: | English |
Published: | Elsevier, 2022-01-01 |
Series: | Journal of Pathology Informatics |
Subjects: | Digital pathology, Dropouts, Image analysis, Operations, Scanner, Whole-slide imaging |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2153353922007647 |
_version_ | 1797977005690454016 |
author | George L. Mutter David S. Milstone David H. Hwang Stephanie Siegmund Alexander Bruce |
author_facet | George L. Mutter David S. Milstone David H. Hwang Stephanie Siegmund Alexander Bruce |
author_sort | George L. Mutter |
collection | DOAJ |
description | Background: Digital pathology operations that precede viewing by a pathologist have a substantial impact on costs and fidelity of the digital image. Scan time and file size determine throughput and storage costs, whereas tissue omission during digital capture (“dropouts”) compromises downstream interpretation. We compared how these variables differ across scanners. Methods: A 212-slide set randomly selected from a gynecologic-gestational pathology practice was used to benchmark scan time, file size, and image completeness. Workflows included the Hamamatsu S210 scanner (operated under default and optimized profiles) and the Leica GT450. Digital tissue dropouts were detected by the aligned overlay of macroscopic glass slide camera images (reference) with the whole slide images created by the slide scanners. Results: File size and scan time were highly correlated within each platform. Differences in GT450, default S210, and optimized S210 performance were seen in average file size (1.4 vs. 2.5 vs. 3.4 GB) and scan time (93 vs. 376 vs. 721 s). Dropouts were seen in 29.5% (186/631) of successful scans overall, ranging from a low of 13.7% (29/212) for the optimized S210 profile to 34.6% (73/211) for the GT450 and 40.4% (84/208) for the default S210 profile. Small dislodged fragments, “shards,” were dropped in 22.2% (140/631) of slides, followed by tissue marginalized at the glass slide edges, 6.2% (39/631). “Unique dropouts,” those for which no equivalent appeared elsewhere in the scan, occurred in only three slides. Of these, 67% (2/3) were “floaters” or contaminants from other cases. Conclusions: Scanning speed and resultant file size vary greatly by scanner type, scanner operation settings, and clinical specimen mix (tissue type, tissue area). Digital image fidelity as measured by tissue dropout frequency and dropout type also varies according to the tissue type and scanner. 
Dropped tissues very rarely (1/631) represent actual specimen tissues that are not represented elsewhere in the scan, and so in most cases cannot alter the diagnosis. Digital pathology platforms vary in their output efficiency and image fidelity to the glass original and should be matched to the intended application. |
first_indexed | 2024-04-11T05:00:06Z |
format | Article |
id | doaj.art-a4a5a862bb1c41fcbd651009d83fb8d0 |
institution | Directory Open Access Journal |
issn | 2153-3539 |
language | English |
last_indexed | 2024-04-11T05:00:06Z |
publishDate | 2022-01-01 |
publisher | Elsevier |
record_format | Article |
series | Journal of Pathology Informatics |
spelling | doaj.art-a4a5a862bb1c41fcbd651009d83fb8d02022-12-26T04:09:09ZengElsevierJournal of Pathology Informatics2153-35392022-01-0113100170Measuring digital pathology throughput and tissue dropoutsGeorge L. Mutter0David S. Milstone1David H. Hwang2Stephanie Siegmund3Alexander Bruce4Department of Pathology, Brigham and Women’s Hospital, Boston, MA, USA; Department of Pathology, Harvard Medical School, Boston, MA, USA; Corresponding author at: Department of Pathology, Brigham and Women’s Hospital, 75 Francis Street, Boston, MA 02115, USA.Department of Pathology, Brigham and Women’s Hospital, Boston, MA, USA; Department of Pathology, Harvard Medical School, Boston, MA, USADepartment of Pathology, Brigham and Women’s Hospital, Boston, MA, USA; Department of Pathology, Harvard Medical School, Boston, MA, USADepartment of Pathology, Brigham and Women’s Hospital, Boston, MA, USA; Department of Pathology, Harvard Medical School, Boston, MA, USADepartment of Pathology, Brigham and Women’s Hospital, Boston, MA, USABackground: Digital pathology operations that precede viewing by a pathologist have a substantial impact on costs and fidelity of the digital image. Scan time and file size determine throughput and storage costs, whereas tissue omission during digital capture (“dropouts”) compromises downstream interpretation. We compared how these variables differ across scanners. Methods: A 212-slide set randomly selected from a gynecologic-gestational pathology practice was used to benchmark scan time, file size, and image completeness. Workflows included the Hamamatsu S210 scanner (operated under default and optimized profiles) and the Leica GT450. Digital tissue dropouts were detected by the aligned overlay of macroscopic glass slide camera images (reference) with the whole slide images created by the slide scanners. Results: File size and scan time were highly correlated within each platform. 
Differences in GT450, default S210, and optimized S210 performance were seen in average file size (1.4 vs. 2.5 vs. 3.4 GB) and scan time (93 vs. 376 vs. 721 s). Dropouts were seen in 29.5% (186/631) of successful scans overall, ranging from a low of 13.7% (29/212) for the optimized S210 profile to 34.6% (73/211) for the GT450 and 40.4% (84/208) for the default S210 profile. Small dislodged fragments, “shards,” were dropped in 22.2% (140/631) of slides, followed by tissue marginalized at the glass slide edges, 6.2% (39/631). “Unique dropouts,” those for which no equivalent appeared elsewhere in the scan, occurred in only three slides. Of these, 67% (2/3) were “floaters” or contaminants from other cases. Conclusions: Scanning speed and resultant file size vary greatly by scanner type, scanner operation settings, and clinical specimen mix (tissue type, tissue area). Digital image fidelity as measured by tissue dropout frequency and dropout type also varies according to the tissue type and scanner. Dropped tissues very rarely (1/631) represent actual specimen tissues that are not represented elsewhere in the scan, and so in most cases cannot alter the diagnosis. Digital pathology platforms vary in their output efficiency and image fidelity to the glass original and should be matched to the intended application.http://www.sciencedirect.com/science/article/pii/S2153353922007647Digital pathologyDropoutsImage analysisOperationsScannerWhole-slide imaging |
spellingShingle | George L. Mutter David S. Milstone David H. Hwang Stephanie Siegmund Alexander Bruce Measuring digital pathology throughput and tissue dropouts Journal of Pathology Informatics Digital pathology Dropouts Image analysis Operations Scanner Whole-slide imaging |
title | Measuring digital pathology throughput and tissue dropouts |
title_full | Measuring digital pathology throughput and tissue dropouts |
title_fullStr | Measuring digital pathology throughput and tissue dropouts |
title_full_unstemmed | Measuring digital pathology throughput and tissue dropouts |
title_short | Measuring digital pathology throughput and tissue dropouts |
title_sort | measuring digital pathology throughput and tissue dropouts |
topic | Digital pathology Dropouts Image analysis Operations Scanner Whole-slide imaging |
url | http://www.sciencedirect.com/science/article/pii/S2153353922007647 |
work_keys_str_mv | AT georgelmutter measuringdigitalpathologythroughputandtissuedropouts AT davidsmilstone measuringdigitalpathologythroughputandtissuedropouts AT davidhhwang measuringdigitalpathologythroughputandtissuedropouts AT stephaniesiegmund measuringdigitalpathologythroughputandtissuedropouts AT alexanderbruce measuringdigitalpathologythroughputandtissuedropouts |