Quantification of fetal brain development from ultrasound images using interpretable deep learning

Bibliographic details
Main author: Hesse, LS
Other authors: Namburete, A
Format: Thesis
Language: English
Published: 2023
Subjects:
Description
Summary: Ultrasound images of the fetal brain are routinely acquired during pregnancy to assess the health and development of the fetus. It is standard clinical practice to obtain simple in-plane measurements in 2D images, which cannot capture the complex structural development of the fetal brain during gestation. Therefore, in this thesis, I propose deep learning-based methods that can improve the understanding of brain development from ultrasound.

Firstly, I studied the use of deep learning for subcortical structure segmentation. Deep learning models typically need a reasonably large number of samples to learn a task effectively. However, as subcortical structure segmentation is not routinely performed in the clinic, it is challenging to obtain large numbers of samples with pixel-wise annotations. For this reason, I explored subcortical segmentation in a low-data regime, demonstrating that segmentation performance close to intra-observer variability can be obtained with only a handful of manual annotations. The developed segmentation models were then applied to a large number of volumes from a diverse, healthy population, generating ultrasound-specific growth curves of subcortical development.

Predicting the gestational age of a fetus from brain morphology can also be used to quantify developmental patterns. While conventional deep learning methods can be used for this task, they typically cannot explain their reasoning process or provide insight into the image regions that contributed to the final prediction. For clinical applications, however, it is vital to understand model behaviour in order to identify failure modes and gain patients' trust. For this reason, I developed an age prediction model for fetal ultrasound that incorporates guided attention in the architecture to make interpretable and local brain age predictions. The attention is regularised with a segmentation loss, encouraging the network to focus on specific parts of the image. I demonstrate that guiding the network to focus on age-discriminative regions (the cortical plate and cerebellum) results in significantly improved prediction performance.

Finally, I propose an alternative approach to interpretable brain age prediction that uses an inherently interpretable network rather than a post-hoc explanation. The network learns a set of representative examples from the training set (prototypes) and predicts the age of a new sample based on its distances to these prototypes. The image-level distances are constructed from patch-level distances, which are structurally matched using optimal transport. Both the prototypes and the distance computations can be visualised, providing an understanding of the model's reasoning process.
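
To make the guided-attention idea concrete, below is a minimal sketch of how a spatial attention map can be regularised with a segmentation loss while the network predicts gestational age. This is not the thesis implementation: the toy backbone, the soft Dice regulariser, the L1 regression term, and the weighting factor lam are illustrative assumptions.

```python
# Minimal sketch (not the thesis code): an age-regression CNN whose spatial
# attention map is pulled towards a segmentation mask of the
# age-discriminative regions (e.g. cortical plate, cerebellum).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedAttentionAgeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(               # toy feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.attention_head = nn.Conv2d(32, 1, 1)    # 1-channel spatial attention
        self.regressor = nn.Linear(32, 1)            # predicts gestational age

    def forward(self, x):
        feats = self.backbone(x)                           # (B, 32, H, W)
        attn = torch.sigmoid(self.attention_head(feats))   # (B, 1, H, W) in [0, 1]
        pooled = (feats * attn).mean(dim=(2, 3))           # attention-weighted pooling
        age = self.regressor(pooled).squeeze(1)            # (B,)
        return age, attn

def soft_dice_loss(attn, mask, eps=1e-6):
    """Soft Dice between the attention map and a binary region mask."""
    inter = (attn * mask).sum(dim=(1, 2, 3))
    union = attn.sum(dim=(1, 2, 3)) + mask.sum(dim=(1, 2, 3))
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def guided_attention_loss(pred_age, true_age, attn, region_mask, lam=0.5):
    # Regression term on age plus a segmentation term that encourages the
    # attention map to cover the age-discriminative regions.
    reg = F.l1_loss(pred_age, true_age)
    seg = soft_dice_loss(attn, region_mask).mean()
    return reg + lam * seg

# Toy usage: random 2D "ultrasound" slices, random region masks and ages.
model = GuidedAttentionAgeNet()
x = torch.randn(4, 1, 64, 64)
mask = (torch.rand(4, 1, 64, 64) > 0.8).float()
age = torch.rand(4) * 20 + 20                     # gestational age in weeks (toy values)
pred, attn = model(x)
guided_attention_loss(pred, age, attn, mask).backward()
```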
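The prototype-based reasoning in the final part can be sketched in a similar spirit. In the illustrative example below (again, not the thesis code), each image is represented as a set of patch embeddings; the distance between a query image and a prototype is an entropic optimal-transport (Sinkhorn) matching cost between their patch sets, and the predicted age is a softmin-weighted average of the prototype ages. The Sinkhorn routine, the feature shapes, and the weighting scheme are all assumptions made for illustration.

```python
# Illustrative sketch: prototype-based age prediction where patch-level
# distances are aggregated into an image-level distance via entropic optimal
# transport, and the final age is a softmin-weighted average of prototype ages.
import torch

def sinkhorn_plan(cost, n_iters=50, eps=0.1):
    """Entropic OT plan between two uniform distributions given a cost matrix."""
    cost = cost / (cost.max() + 1e-8)          # scale cost so the Gibbs kernel does not underflow
    n, m = cost.shape
    K = torch.exp(-cost / eps)                 # Gibbs kernel
    a = torch.full((n,), 1.0 / n)              # uniform marginal over query patches
    b = torch.full((m,), 1.0 / m)              # uniform marginal over prototype patches
    u, v = a.clone(), b.clone()
    for _ in range(n_iters):                   # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return torch.diag(u) @ K @ torch.diag(v)   # transport plan, shape (n, m)

def image_level_distance(query_patches, proto_patches):
    """OT cost between two sets of patch embeddings, each of shape (n_patches, d)."""
    cost = torch.cdist(query_patches, proto_patches)   # pairwise patch distances
    plan = sinkhorn_plan(cost)
    return (plan * cost).sum()                          # expected matching cost

def predict_age(query_patches, prototypes, tau=1.0):
    """prototypes: list of (patch_embeddings, age) pairs learned from training data."""
    dists = torch.stack([image_level_distance(query_patches, p) for p, _ in prototypes])
    ages = torch.tensor([a for _, a in prototypes])
    weights = torch.softmax(-dists / tau, dim=0)        # closer prototypes weigh more
    return (weights * ages).sum()

# Toy usage: a query scan as 49 patch embeddings and five prototypes with ages.
query = torch.randn(49, 128)
protos = [(torch.randn(49, 128), 20.0 + i) for i in range(5)]
print(predict_age(query, protos))
```

Because the prediction is literally a weighted combination of training prototypes, both the prototypes themselves and the patch-to-patch transport plans can be visualised, which is what makes this style of model inherently interpretable rather than explained post hoc.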