Modularization of deep networks allows cross-modality reuse: lesson learnt
Fundus photography and Optical Coherence Tomography Angiography (OCT-A) are two commonly used modalities in ophthalmic imaging. With the development of deep learning algorithms, fundus image processing, especially retinal vessel segmentation, has been extensively studied. Built upon the known operator theory, interpretable deep network pipelines with well-defined modules have been constructed on fundus images. In this work, we first train a modularized network pipeline for the task of retinal vessel segmentation on the fundus database DRIVE. The pretrained preprocessing module from the pipeline is then directly transferred onto OCT-A data for image quality enhancement without further fine-tuning. Output images show that the preprocessing net can balance the contrast, suppress noise, and thereby produce vessel trees with improved connectivity in both image modalities. The visual impression is confirmed by an observer study with five OCT-A experts. Statistics of the experts' grades indicate that the transferred module improves both the image quality and the diagnostic quality. Our work provides an example of how modules within network pipelines built upon the known operator theory facilitate cross-modality reuse without additional training or transfer learning. (An illustrative code sketch of this module reuse appears at the end of this record.)
Main Authors: | Husvogt, Lennart; Fujimoto, James G |
---|---|
Other Authors: | Massachusetts Institute of Technology. Research Laboratory of Electronics |
Format: | Article |
Language: | English |
Published: | Springer Fachmedien Wiesbaden, 2021 |
Online Access: | https://hdl.handle.net/1721.1/129537 |
author | Husvogt, Lennart; Fujimoto, James G |
author2 | Massachusetts Institute of Technology. Research Laboratory of Electronics |
collection | MIT |
description | Fundus photography and Optical Coherence Tomography Angiography (OCT-A) are two commonly used modalities in ophthalmic imaging. With the development of deep learning algorithms, fundus image processing, especially retinal vessel segmentation, has been extensively studied. Built upon the known operator theory, interpretable deep network pipelines with well-defined modules have been constructed on fundus images. In this work, we first train a modularized network pipeline for the task of retinal vessel segmentation on the fundus database DRIVE. The pretrained preprocessing module from the pipeline is then directly transferred onto OCT-A data for image quality enhancement without further fine-tuning. Output images show that the preprocessing net can balance the contrast, suppress noise, and thereby produce vessel trees with improved connectivity in both image modalities. The visual impression is confirmed by an observer study with five OCT-A experts. Statistics of the experts' grades indicate that the transferred module improves both the image quality and the diagnostic quality. Our work provides an example of how modules within network pipelines built upon the known operator theory facilitate cross-modality reuse without additional training or transfer learning. |
format | Article |
id | mit-1721.1/129537 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
publisher | Springer Fachmedien Wiesbaden |
record_format | dspace |
spelling | mit-1721.1/129537 2022-09-28T16:24:46Z |
departments | Massachusetts Institute of Technology. Research Laboratory of Electronics; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
sponsorship | European Union. Horizon 2020 Research and Innovation Programme (Grant 810316) |
dates | 2021-01-25T15:53:28Z; 2021-01-25T15:53:28Z; 2020-02; 2019-11; 2020-12-15T13:49:15Z |
type | Article; http://purl.org/eprint/type/ConferencePaper |
isbn | 9783658292676 |
issn | 2628-8958 |
citation | Wu, Weilin et al. "Modularization of deep networks allows cross-modality reuse: lesson learnt." Informatik aktuell (February 2020): 274-279 © 2020 The Author(s) |
doi | 10.1007/978-3-658-29267-6_61 |
series | Informatik aktuell |
rights | Creative Commons Attribution-Noncommercial-Share Alike; http://creativecommons.org/licenses/by-nc-sa/4.0/ |
file_format | application/pdf |
source | arXiv |
title | Modularization of deep networks allows cross-modality reuse: lesson learnt |
url | https://hdl.handle.net/1721.1/129537 |
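The abstract above describes training a modularized segmentation pipeline on fundus photographs (DRIVE) and then lifting out its pretrained preprocessing module, frozen and without any fine-tuning, to enhance OCT-A images. Below is a minimal sketch of that module-reuse pattern, assuming a PyTorch-style implementation; the class names, layer choices, and image size are illustrative placeholders rather than the authors' actual architecture.

```python
# Minimal sketch (hypothetical names): a modular pipeline whose pretrained
# preprocessing module is reused on a second imaging modality without fine-tuning.
import torch
import torch.nn as nn

class PreprocessingNet(nn.Module):
    """Learnable image-enhancement stage (contrast balancing, denoising)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class VesselSegmenter(nn.Module):
    """Stand-in for the downstream vessel-segmentation stage."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel vessel logit
        )

    def forward(self, x):
        return self.net(x)

# 1) Train the full modular pipeline end-to-end on fundus images and vessel labels.
preproc, segmenter = PreprocessingNet(), VesselSegmenter()
pipeline = nn.Sequential(preproc, segmenter)
# ... standard supervised training loop over the fundus dataset goes here ...

# 2) Reuse only the pretrained preprocessing module on OCT-A data,
#    frozen and without further fine-tuning.
preproc.eval()
for p in preproc.parameters():
    p.requires_grad_(False)

octa_image = torch.rand(1, 1, 304, 304)  # placeholder OCT-A en-face image
with torch.no_grad():
    enhanced = preproc(octa_image)       # contrast-balanced, denoised output
```

Because the preprocessing stage is a self-contained module with a well-defined role in the pipeline, it can be detached and applied to a different modality without touching the downstream segmentation weights.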