Subcortical segmentation of the fetal brain in 3D ultrasound using deep learning

The quantification of subcortical volume development from 3D fetal ultrasound can provide important diagnostic information during pregnancy monitoring. However, manual segmentation of subcortical structures in ultrasound volumes is time-consuming and challenging due to low soft tissue contrast, speckle and shadowing artifacts.

Full description

Bibliographic Details
Main Authors: Hesse, LS, Aliasi, M, Moser, F, Haak, MC, Xie, W, Jenkinson, M, Namburete, AIL
Other Authors: INTERGROWTH-21st Consortium
Format: Journal article
Language: English
Published: Elsevier 2022
author Hesse, LS
Aliasi, M
Moser, F
Haak, MC
Xie, W
Jenkinson, M
Namburete, AIL
author2 INTERGROWTH-21st Consortium
collection OXFORD
description The quantification of subcortical volume development from 3D fetal ultrasound can provide important diagnostic information during pregnancy monitoring. However, manual segmentation of subcortical structures in ultrasound volumes is time-consuming and challenging due to low soft tissue contrast, speckle and shadowing artifacts. For this reason, we developed a convolutional neural network (CNN) for the automated segmentation of the choroid plexus (CP), lateral posterior ventricle horns (LPVH), cavum septum pellucidum et vergae (CSPV), and cerebellum (CB) from 3D ultrasound. As ground-truth labels are scarce and expensive to obtain, we applied few-shot learning, in which only a small number of manual annotations (n = 9) are used to train a CNN. We compared training a CNN with only a few individually annotated volumes versus many weakly labelled volumes obtained from atlas-based segmentations. This showed that segmentation performance close to intra-observer variability can be obtained with only a handful of manual annotations. Finally, the trained models were applied to a large number (n = 278) of ultrasound image volumes of a diverse, healthy population, obtaining novel US-specific growth curves of the respective structures during the second trimester of gestation.
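The abstract above outlines the core training setup: a 3D segmentation CNN trained with only a handful of manually annotated volumes (few-shot learning) to delineate the CP, LPVH, CSPV, and CB. The paper's exact architecture and hyperparameters are not given in this record, so the following is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: a deliberately small 3D encoder-decoder, a soft multi-class Dice loss, and synthetic stand-ins for the nine annotated volumes. Network depth, patch size, and training schedule are illustrative only.

```python
# Hypothetical sketch of few-shot training of a 3D segmentation CNN
# (not the authors' implementation; architecture and hyperparameters are assumptions).
import torch
import torch.nn as nn

N_CLASSES = 5  # background + CP, LPVH, CSPV, CB (structures named in the abstract)

class Tiny3DUNet(nn.Module):
    """A deliberately small 3D encoder-decoder; the real model is likely deeper."""
    def __init__(self, n_classes=N_CLASSES, ch=16):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv3d(cin, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
                nn.Conv3d(cout, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
            )
        self.enc1 = block(1, ch)
        self.down = nn.MaxPool3d(2)
        self.enc2 = block(ch, 2 * ch)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec1 = block(3 * ch, ch)
        self.head = nn.Conv3d(ch, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

def soft_dice_loss(logits, target, eps=1e-5):
    """Multi-class soft Dice loss; target holds integer class labels."""
    probs = torch.softmax(logits, dim=1)
    onehot = nn.functional.one_hot(target, num_classes=logits.shape[1])
    onehot = onehot.permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    intersection = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    return 1.0 - ((2 * intersection + eps) / (denom + eps)).mean()

# Synthetic stand-ins for the nine annotated volumes (real data: aligned 3D US volumes).
volumes = torch.rand(9, 1, 64, 64, 64)
labels = torch.randint(0, N_CLASSES, (9, 64, 64, 64))

model = Tiny3DUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2):  # a real run would use many more epochs plus data augmentation
    for i in range(volumes.shape[0]):
        logits = model(volumes[i:i + 1])
        loss = soft_dice_loss(logits, labels[i:i + 1])
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The final step described in the abstract, deriving US-specific growth curves from the n = 278 segmented volumes, amounts to regressing each structure's volume against gestational age. A minimal sketch follows, assuming a simple quadratic fit with normal-approximation centile bands and synthetic data; the paper's actual curve-fitting method is not specified in this record.

```python
# Hypothetical growth-curve sketch: quadratic fit of structure volume vs gestational age.
import numpy as np

rng = np.random.default_rng(0)
ga_weeks = rng.uniform(14, 28, size=278)                   # second-trimester ages (synthetic)
volume_ml = 0.02 * ga_weeks**2 + rng.normal(0, 0.3, 278)   # synthetic structure volumes

coeffs = np.polyfit(ga_weeks, volume_ml, deg=2)            # mean growth curve
residual_sd = np.std(volume_ml - np.polyval(coeffs, ga_weeks))
grid = np.linspace(14, 28, 50)
p50 = np.polyval(coeffs, grid)
p5, p95 = p50 - 1.645 * residual_sd, p50 + 1.645 * residual_sd  # approximate centiles
```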
format Journal article
id oxford-uuid:f4fbf2f1-bb75-4df4-92e5-c6ca1762dfdc
institution University of Oxford
language English
publishDate 2022
publisher Elsevier
title Subcortical segmentation of the fetal brain in 3D ultrasound using deep learning