Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging


Bibliographic Details
Main Author: Yeung, PH
Other Authors: Namburete, A
Format: Thesis
Language:English
Published: 2022
Subjects:
_version_ 1797109671250624512
author Yeung, PH
author2 Namburete, A
author_facet Namburete, A
Yeung, PH
author_sort Yeung, PH
collection OXFORD
description <p>Three-dimensional (3D) ultrasound imaging has contributed to our understanding of fetal developmental processes in the womb by providing rich contextual information about the inherently 3D anatomy. However, its use is limited in clinical settings due to high purchasing costs and limited diagnostic practicality. Freehand two-dimensional (2D) ultrasound imaging, in contrast, is routinely used in standard obstetric exams. The low cost and portability of 2D ultrasound render it uniquely suitable for use in low- and middle-income settings. However, it demands a high level of operator expertise and inherently lacks a 3D representation of the anatomy, which limits its potential for more accessible and advanced assessment. Capitalizing on the flexibility offered by freehand 2D ultrasound acquisition, this thesis presents a deep learning-based framework for optimizing the utilization and diagnostic power of 2D freehand ultrasound in fetal brain imaging.</p> <p>First, a localization model is presented to predict the location of 2D ultrasound fetal brain scans in a 3D brain atlas. It is trained by sampling 2D slices from aligned 3D fetal brain volumes, so that heavy annotation of each 2D scan is not required. This can be used for scanning guidance and standard-plane localization.</p> <p>An unsupervised methodology is further proposed to adapt a trained localization model to freehand 2D ultrasound images acquired from arbitrary domains, for example different sonographers, manufacturers and acquisition protocols. This enables the model to be used at the bedside in practice, where it can be fine-tuned before inference using only the images acquired in the new domain.</p> <p>Building upon the ability to localize 2D scans in the 3D brain atlas, a framework is further presented to reconstruct 3D volumes from non-sensor-tracked 2D ultrasound images using an implicit representation.
With this slice-to-volume reconstruction framework, additional 3D information can be extracted from 2D freehand scans.</p> <p>Finally, a semi-automatic model, trained only on raw 3D volumes without any manual annotation, is presented to segment arbitrary structures of interest in 3D medical volumes, requiring manual annotation of only a single slice at inference. The model is tested on a wide variety of medical imaging datasets and anatomical structures, verifying its generalizability.</p> <p>In the design of the framework presented in this thesis, three fundamental principles, namely minimal human annotation, generalizability and sensorless operation, are followed to optimize its seamless integration into the clinical workflow. This may modernize routine freehand scanning and enhance its accessibility, while maximizing the clinical information gained from routine scans acquired as part of the continuum of pregnancy care.</p>
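The training strategy for the localization model, sampling 2D slices with known poses from aligned 3D volumes so that each slice comes with a free ground-truth label, can be illustrated with a minimal sketch. Everything here (the `sample_slice` function, the rotation/offset pose parameterization, nearest-neighbour sampling) is an illustrative assumption, not the thesis's actual implementation.

```python
import numpy as np

def sample_slice(volume, rotation, offset, size=64):
    """Extract a 2D slice from a 3D volume along an arbitrary plane.

    rotation: 3x3 rotation matrix; offset: 3-vector (voxel units) giving
    the plane centre relative to the volume centre. Nearest-neighbour
    sampling keeps the sketch dependency-free.
    """
    # In-plane pixel grid centred at the origin
    u = np.arange(size) - size / 2
    uu, vv = np.meshgrid(u, u, indexing="ij")
    plane = np.stack([uu, vv, np.zeros_like(uu)], axis=-1)  # (size, size, 3)

    centre = np.array(volume.shape) / 2
    coords = plane @ rotation.T + centre + offset           # rotate, then translate
    idx = np.clip(np.rint(coords).astype(int), 0, np.array(volume.shape) - 1)

    slice_2d = volume[idx[..., 0], idx[..., 1], idx[..., 2]]
    pose = (rotation, offset)  # ground-truth label, obtained for free
    return slice_2d, pose

# Example: an identity pose at the volume centre reproduces an axial plane.
volume = np.arange(32 ** 3, dtype=float).reshape(32, 32, 32)
axial, pose = sample_slice(volume, np.eye(3), np.zeros(3), size=8)
```

Randomizing `rotation` and `offset` yields unlimited (slice, pose) training pairs from a handful of aligned volumes, which is what removes the need for per-scan manual annotation.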
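The implicit-representation idea behind the slice-to-volume reconstruction can also be sketched: a coordinate network maps a 3D atlas position to an intensity, and is fitted to the pixels of the localized 2D slices; once fitted, it can be queried on any 3D grid to render a volume. This toy version (a one-hidden-layer NumPy MLP, a synthetic stand-in intensity function, full-batch gradient descent) is only a conceptual sketch under those assumptions; the thesis's network, losses and encodings will differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pixel data from localized slices: each training pair is a
# 3D coordinate (in atlas space) and the intensity observed there.
def observed_intensity(p):
    return np.sin(2 * p[:, 0]) * np.cos(p[:, 1]) + 0.5 * p[:, 2]

pts = rng.uniform(-1.0, 1.0, size=(2000, 3))
vals = observed_intensity(pts)

# Coordinate MLP f(x, y, z) -> intensity: one tanh hidden layer.
W1 = rng.normal(0.0, 1.0, (3, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, 1)); b2 = np.zeros(1)

def forward(p):
    h = np.tanh(p @ W1 + b1)
    return (h @ W2 + b2).ravel(), h

def mse():
    pred, _ = forward(pts)
    return float(np.mean((pred - vals) ** 2))

mse_before = mse()
lr, n = 1e-1, len(pts)
for _ in range(2000):
    pred, h = forward(pts)
    err = (pred - vals)[:, None]           # (n, 1) residuals of the MSE loss
    gW2 = h.T @ err / n; gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # back-propagate through tanh
    gW1 = pts.T @ dh / n; gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
mse_after = mse()
# The fitted MLP is a continuous volume: evaluating it on a dense 3D grid
# reconstructs intensities at positions no single 2D slice observed.
```

Because the representation is continuous in the coordinates, no physical tracking sensor is needed: the predicted slice poses alone place each pixel in 3D, which is what makes sensorless reconstruction possible.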
first_indexed 2024-03-07T07:44:47Z
format Thesis
id oxford-uuid:fce88a5f-64a3-47d7-9c60-f492601abfee
institution University of Oxford
language English
last_indexed 2024-03-07T07:44:47Z
publishDate 2022
record_format dspace
spelling oxford-uuid:fce88a5f-64a3-47d7-9c60-f492601abfee2023-05-23T08:28:14ZDeep learning techniques to bridge the gap between 2D and 3D ultrasound imagingThesishttp://purl.org/coar/resource_type/c_db06uuid:fce88a5f-64a3-47d7-9c60-f492601abfeeThree-dimensional imaging in medicineImage analysisDiagnostic ultrasonic imagingDeep learning (Machine learning)Biomedical engineeringEnglishHyrax Deposit2022Yeung, PHNamburete, AXie, WNoble, JGrau Colomer, VWein, W
spellingShingle Three-dimensional imaging in medicine
Image analysis
Diagnostic ultrasonic imaging
Deep learning (Machine learning)
Biomedical engineering
Yeung, PH
Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging
title Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging
title_full Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging
title_fullStr Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging
title_full_unstemmed Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging
title_short Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging
title_sort deep learning techniques to bridge the gap between 2d and 3d ultrasound imaging
topic Three-dimensional imaging in medicine
Image analysis
Diagnostic ultrasonic imaging
Deep learning (Machine learning)
Biomedical engineering
work_keys_str_mv AT yeungph deeplearningtechniquestobridgethegapbetween2dand3dultrasoundimaging