MedFusionGAN: multimodal medical image fusion using an unsupervised deep generative adversarial network

Abstract Purpose This study proposed an end-to-end unsupervised medical image fusion generative adversarial network, MedFusionGAN, to fuse computed tomography (CT) and high-resolution isotropic 3D T1-Gd magnetic resonance imaging (MRI) image sequences, generating an image that combines CT bone structure with MRI soft-tissue contrast in order to improve target delineation and reduce radiotherapy planning time. Methods We used a publicly available multicenter medical dataset (GLIS-RT, 230 patients) from the Cancer Imaging Archive. To improve the model's generalization, we considered different imaging protocols and patients with various brain tumor types, including metastases. The proposed MedFusionGAN consists of one generator network and one discriminator network trained in an adversarial scenario. Content, style, and L1 losses were used to train the generator to preserve the texture and structure information of the MRI and CT images. Results MedFusionGAN successfully generated fused images with MRI soft-tissue and CT bone contrast. Its results were compared quantitatively and qualitatively with seven traditional and eight deep learning (DL) state-of-the-art methods. Qualitatively, our method fused the source images at the highest spatial resolution without introducing image artifacts. We reported nine quantitative metrics to quantify the preservation of structural similarity, contrast, distortion level, and image edges in the fused images. Our method outperformed both the traditional and DL methods on six of the nine metrics, and ranked second on three and two of the remaining metrics when compared with the traditional and DL methods, respectively. To compare soft-tissue contrast, intensity profiles along the tumor and the tumor contours produced by the fusion methods were evaluated; MedFusionGAN provided a more consistent intensity profile and better segmentation performance. Conclusions The proposed end-to-end unsupervised method successfully fused MRI and CT images. The fused image could improve delineation of targets and organs at risk (OARs), an important aspect of radiotherapy treatment planning.
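
The abstract describes a single generator/discriminator pair trained adversarially, with the generator driven by content, style, and L1 losses. The paper's actual architecture, loss weights, and feature extractor are not given in this record, so the sketch below is only an illustration of how such a generator objective is commonly assembled in PyTorch: the VGG-based content/style terms, the Gram-matrix style loss, and all weight values and function names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a MedFusionGAN-style generator objective (not the
# authors' code): adversarial + content + style + L1 terms, as named in the
# abstract. The VGG-feature choice and the loss weights are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models

class FeatureExtractor(nn.Module):
    """Frozen VGG-19 slice used for perceptual (content) and style features."""
    def __init__(self, layers=16):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:layers]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()

    def forward(self, x):
        # Single-channel medical images -> 3 channels expected by VGG.
        return self.vgg(x.repeat(1, 3, 1, 1))

def gram_matrix(feat):
    """Channel-correlation matrix used by the classic style loss."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(fused, mri, ct, d_fake, extractor,
                   w_adv=1.0, w_content=1.0, w_style=10.0, w_l1=100.0):
    l1 = nn.L1Loss()
    # Adversarial term: the generator tries to make the discriminator
    # classify the fused image as real.
    adv = nn.functional.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))
    f_fused, f_mri, f_ct = extractor(fused), extractor(mri), extractor(ct)
    # Content (perceptual) term: keep structure from both source modalities.
    content = l1(f_fused, f_mri) + l1(f_fused, f_ct)
    # Style term: match texture statistics via Gram matrices.
    style = (l1(gram_matrix(f_fused), gram_matrix(f_mri)) +
             l1(gram_matrix(f_fused), gram_matrix(f_ct)))
    # Pixel-level L1 term: stay close to the source intensities.
    pix = l1(fused, mri) + l1(fused, ct)
    return w_adv * adv + w_content * content + w_style * style + w_l1 * pix
```

In a full training loop this would alternate with a standard discriminator update on real CT/MRI slices versus fused outputs; the weights w_adv, w_content, w_style, and w_l1 above are placeholders, not values reported in the paper.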

Bibliographic Details
Main Authors: Mojtaba Safari (Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval), Ali Fatemi (Department of Physics, Jackson State University), Louis Archambault (Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval)
Format: Article
Language: English
Published: BMC 2023-12-01
Series: BMC Medical Imaging, vol. 23, no. 1, pp. 1-16
ISSN: 1471-2342
Subjects: IGART, Deep learning, MRI, Brain tumor
Online Access: https://doi.org/10.1186/s12880-023-01160-w