Improving OCT Image Segmentation of Retinal Layers by Utilizing a Machine Learning Based Multistage System of Stacked Multiscale Encoders and Decoders

Bibliographic Details
Main Authors: Arunodhayan Sampath Kumar, Tobias Schlosser, Holger Langner, Marc Ritter, Danny Kowerko
Format: Article
Language: English
Published: MDPI AG 2023-10-01
Series: Bioengineering
Subjects:
Online Access: https://www.mdpi.com/2306-5354/10/10/1177
Description
Summary: Optical coherence tomography (OCT)-based retinal imagery is often utilized to determine influential factors in patient progression and treatment, for which the retinal layers of the human eye are investigated to assess a patient's health status and eyesight. In this contribution, we propose a machine learning (ML)-based multistage system of stacked multiscale encoders and decoders for the image segmentation of OCT imagery of the retinal layers, enabling the subsequent evaluation of physiological and pathological states. Our proposed system combines commonly deployed deep learning (DL) methods built on deep neural networks (DNNs), and its results highlight its benefits over currently investigated approaches. We conclude that stacking multiple multiscale encoders and decoders improves the scores achieved on the image segmentation task. Our retinal-layer-based segmentation reaches a final segmentation performance of up to 82.25 ± 0.74% for the Sørensen–Dice coefficient on the evaluated peripapillary OCT data set, outperforming the current best single-stage model, which scores 80.70 ± 0.20%, by 1.55 percentage points. Additionally, we provide results on the Duke SD-OCT, Heidelberg, and UMN data sets to illustrate our model's performance on especially noisy data sets.
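
The following is a minimal, illustrative Python/PyTorch sketch of the two ideas named in the summary: stacking multiscale encoder-decoder stages so that each stage refines the previous stage's prediction, and scoring the result with the Sørensen–Dice coefficient. The stage design, channel counts, number of stages, and number of retinal layer classes are assumptions made here for illustration and do not reproduce the authors' model.

```python
# Illustrative sketch only (not the authors' implementation): a small stacked
# multiscale encoder-decoder segmenter and a Sørensen–Dice coefficient function.
import torch
import torch.nn as nn


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Sørensen–Dice coefficient 2|A ∩ B| / (|A| + |B|), computed on binary masks."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


class EncoderDecoderStage(nn.Module):
    """One multiscale encoder-decoder stage (a small U-Net-like block)."""

    def __init__(self, in_channels: int, num_classes: int, base: int = 16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                                   # features at full resolution
        e2 = self.enc2(e1)                                  # features at half resolution (second scale)
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # upsample and fuse via skip connection
        return self.head(d)                                 # per-class logits


class StackedSegmenter(nn.Module):
    """Multistage system: each stage receives the image plus the previous stage's logits."""

    def __init__(self, num_classes: int, num_stages: int = 2):
        super().__init__()
        self.stages = nn.ModuleList(
            [EncoderDecoderStage(1 if i == 0 else 1 + num_classes, num_classes) for i in range(num_stages)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = None
        for i, stage in enumerate(self.stages):
            inp = x if i == 0 else torch.cat([x, logits], dim=1)  # refine the previous stage's output
            logits = stage(inp)
        return logits


if __name__ == "__main__":
    model = StackedSegmenter(num_classes=9)        # class count is an assumed placeholder
    scan = torch.randn(1, 1, 128, 128)             # dummy single-channel OCT B-scan
    pred = model(scan).argmax(dim=1)               # predicted layer label per pixel
    target = torch.rand(1, 128, 128) > 0.5         # dummy ground-truth mask for one layer
    print(f"Dice for class 1: {dice_coefficient(pred == 1, target):.4f}")
```

In this sketch, later stages see both the raw B-scan and the earlier stage's logits, which is one common way to realize a multistage refinement; the paper's exact stacking scheme may differ.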
ISSN: 2306-5354