Self-supervised ultrasound to MRI fetal brain image synthesis
Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images which closely resemble anatomical images are much easier for non-experts to interpret. Thus in this article we propose to generate MR-like images directly from clinical US images. In medical image analysis such a capability is potentially useful as well, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised without any external annotations. Specifically, based on an assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data is unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to utilise non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to consider the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively with comparison to real fetal MR images and other approaches to synthesis, demonstrating the feasibility of synthesising realistic MR images.
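The cross-modal attention described in the abstract lets features from one modality draw on non-local spatial information from the other. As a minimal, hypothetical sketch (not the authors' implementation, and using plain numpy rather than a deep-learning framework): each flattened US feature position computes affinities against every MRI feature position, softmax-normalises them, and fuses the attended MRI context back into the US features with a residual connection.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(us_feat, mr_feat):
    """Hypothetical non-local cross-modal fusion.

    us_feat: (N, d) flattened US feature map (N spatial positions)
    mr_feat: (M, d) flattened MRI feature map (M spatial positions)
    Each US position attends over ALL MRI positions, so the fused
    features carry non-local multi-modal context.
    """
    d = us_feat.shape[1]
    scores = us_feat @ mr_feat.T / np.sqrt(d)   # (N, M) affinity matrix
    weights = softmax(scores, axis=1)           # normalise over MRI positions
    fused = weights @ mr_feat                   # (N, d) attended MRI context
    return us_feat + fused                      # residual fusion

rng = np.random.default_rng(0)
us = rng.standard_normal((16, 8))   # 16 US positions, 8-dim features
mr = rng.standard_normal((20, 8))   # 20 MRI positions, 8-dim features
out = cross_modal_attention(us, mr)
print(out.shape)  # (16, 8): one fused feature vector per US position
```

In a real network the affinities would typically be computed from learned query/key/value projections; the raw dot-product form above is only to illustrate the knowledge-fusion idea.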
Main Authors: | Jiao, J; Namburete, AIL; Papageorghiou, AT; Noble, JA |
---|---|
Format: | Journal article |
Language: | English |
Published: | IEEE, 2020 |
author | Jiao, J Namburete, AIL Papageorghiou, AT Noble, JA |
collection | OXFORD |
description | Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images which closely resemble anatomical images are much easier for non-experts to interpret. Thus in this article we propose to generate MR-like images directly from clinical US images. In medical image analysis such a capability is potentially useful as well, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised without any external annotations. Specifically, based on an assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data is unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to utilise non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to consider the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively with comparison to real fetal MR images and other approaches to synthesis, demonstrating its feasibility of synthesising realistic MR images. |
id | oxford-uuid:8ce5a71a-4246-4d4f-a635-e2b70552c015 |
institution | University of Oxford |
title | Self-supervised ultrasound to MRI fetal brain image synthesis |