LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion

Medical image fusion, which aims to derive complementary information from multi-modality medical images, plays an important role in many clinical applications, such as medical diagnosis and treatment. We propose LatLRR-FCNs, a hybrid medical image fusion framework consisting of latent low-rank representation (LatLRR) and fully convolutional networks (FCNs). The LatLRR module decomposes the multi-modality medical images into low-rank and saliency components, which provide fine-grained details and preserve energies, respectively. The FCN module preserves both global and local information by generating a weighting map for each modality image. The final weighting map is obtained using the weighted local energy and the weighted sum of the eight-neighborhood-based modified Laplacian. The fused low-rank component is generated by combining the low-rank components of each modality image under the guidance of the final weighting map within pyramid-based fusion, while a simple sum strategy is used for the saliency components. The usefulness and efficiency of the proposed framework are thoroughly evaluated on four medical image fusion tasks: computed tomography (CT) and magnetic resonance (MR), T1- and T2-weighted MR, positron emission tomography and MR, and single-photon emission CT and MR. The results demonstrate that, by leveraging LatLRR for image detail extraction and the FCNs for global and local information description, the framework can outperform state-of-the-art methods in both objective assessment and visual quality in some cases. Furthermore, our method is competitive with other baselines in terms of computational cost.
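The decomposition named in this description is latent low-rank representation. Assuming the paper follows the standard LatLRR formulation (Liu and Yan, ICCV 2011), each source image, arranged as a data matrix \(X\), is decomposed by solving

\[
\min_{Z,\,L,\,E}\ \|Z\|_{*} + \|L\|_{*} + \lambda\,\|E\|_{1}
\quad\text{s.t.}\quad X = XZ + LX + E,
\]

where \(\|\cdot\|_{*}\) is the nuclear norm, \(\|\cdot\|_{1}\) the \(\ell_{1}\) norm, and \(\lambda > 0\) weights the sparse error term. The low-rank part \(XZ\) carries the fine-grained structure to be fused, the saliency part \(LX\) carries the modality-specific salient features, and \(E\) absorbs noise.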


Bibliographic Details
Main Authors: Zhengyuan Xu, Wentao Xiang, Songsheng Zhu, Rui Zeng, Cesar Marquez-Chin, Zhen Chen, Xianqing Chen, Bin Liu, Jianqing Li
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-01-01
Series: Frontiers in Neuroscience
Subjects: multi-modality medical image; latent low-rank representation; fully convolutional networks; medical image fusion; Laplacian pyramid
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2020.615435/full
author Zhengyuan Xu
Wentao Xiang
Songsheng Zhu
Rui Zeng
Cesar Marquez-Chin
Zhen Chen
Xianqing Chen
Bin Liu
Jianqing Li
author_facet Zhengyuan Xu
Wentao Xiang
Songsheng Zhu
Rui Zeng
Cesar Marquez-Chin
Zhen Chen
Xianqing Chen
Bin Liu
Jianqing Li
author_sort Zhengyuan Xu
collection DOAJ
description Medical image fusion, which aims to derive complementary information from multi-modality medical images, plays an important role in many clinical applications, such as medical diagnosis and treatment. We propose LatLRR-FCNs, a hybrid medical image fusion framework consisting of latent low-rank representation (LatLRR) and fully convolutional networks (FCNs). The LatLRR module decomposes the multi-modality medical images into low-rank and saliency components, which provide fine-grained details and preserve energies, respectively. The FCN module preserves both global and local information by generating a weighting map for each modality image. The final weighting map is obtained using the weighted local energy and the weighted sum of the eight-neighborhood-based modified Laplacian. The fused low-rank component is generated by combining the low-rank components of each modality image under the guidance of the final weighting map within pyramid-based fusion, while a simple sum strategy is used for the saliency components. The usefulness and efficiency of the proposed framework are thoroughly evaluated on four medical image fusion tasks: computed tomography (CT) and magnetic resonance (MR), T1- and T2-weighted MR, positron emission tomography and MR, and single-photon emission CT and MR. The results demonstrate that, by leveraging LatLRR for image detail extraction and the FCNs for global and local information description, the framework can outperform state-of-the-art methods in both objective assessment and visual quality in some cases. Furthermore, our method is competitive with other baselines in terms of computational cost.
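To make the decision-map step in the description concrete, the following is a minimal, single-scale NumPy/SciPy sketch of the two activity measures the abstract names, the weighted local energy (WLE) and the weighted sum of the eight-neighborhood-based modified Laplacian (WSEML), together with the stated simple-sum rule for the saliency parts. This is not the authors' released code: the 3x3 kernel, the additive combination of the two measures, the choose-max weighting, and the latlrr_decompose stand-in (a Gaussian base/detail split used in place of a real LatLRR solver) are all assumptions for illustration, and the paper's FCN weighting maps and Laplacian-pyramid fusion are deliberately omitted.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# 3x3 weighting kernel for WLE/WSEML (an assumption; a common choice in
# activity-measure-based fusion, not necessarily the paper's exact kernel).
W = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]]) / 16.0


def wle(img: np.ndarray) -> np.ndarray:
    """Weighted local energy: weighted window sum of squared intensities."""
    return convolve(img.astype(np.float64) ** 2, W, mode="reflect")


def wseml(img: np.ndarray) -> np.ndarray:
    """Weighted sum of the eight-neighborhood-based modified Laplacian."""
    p = np.pad(img.astype(np.float64), 1, mode="reflect")
    c = p[1:-1, 1:-1]
    # Second differences along the two axes plus the two diagonals;
    # diagonal terms are distance-weighted by 1/sqrt(2).
    ml = (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
          + np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])
          + np.abs(2 * c - p[:-2, :-2] - p[2:, 2:]) / np.sqrt(2)
          + np.abs(2 * c - p[:-2, 2:] - p[2:, :-2]) / np.sqrt(2))
    return convolve(ml, W, mode="reflect")


def latlrr_decompose(img: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Hypothetical stand-in for the LatLRR solver: a smooth base plays
    the role of the low-rank part XZ and the residual plays the saliency
    part LX. The real method solves
    min ||Z||_* + ||L||_* + lambda*||E||_1  s.t.  X = XZ + LX + E."""
    base = gaussian_filter(img.astype(np.float64), sigma=2.0)
    return base, img - base


def fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Sketch of the fusion rule for two registered source images."""
    lr_a, sal_a = latlrr_decompose(img_a)
    lr_b, sal_b = latlrr_decompose(img_b)

    # Combine the two activity measures into one per-pixel map and take a
    # choose-max vote (an assumption; the paper blends this map with FCN
    # weighting maps inside a pyramid-based fusion).
    act_a = wle(lr_a) + wseml(lr_a)
    act_b = wle(lr_b) + wseml(lr_b)
    mask = (act_a >= act_b).astype(np.float64)
    fused_lr = mask * lr_a + (1.0 - mask) * lr_b

    # The abstract's stated rule for the saliency components: simple sum.
    return fused_lr + (sal_a + sal_b)


if __name__ == "__main__":
    # Synthetic stand-ins for a pair of registered CT / MR slices.
    a = np.random.rand(256, 256)
    b = np.random.rand(256, 256)
    print(fuse(a, b).shape)  # (256, 256)
```

In the actual framework, the map produced this way would be blended with the FCN weighting maps and applied per pyramid level rather than at a single scale.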
first_indexed 2024-12-16T06:56:18Z
format Article
id doaj.art-97d4c6bd9ffd4733b2cb3f2178dc611a
institution Directory Open Access Journal
issn 1662-453X
language English
last_indexed 2024-12-16T06:56:18Z
publishDate 2021-01-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Neuroscience
spelling doaj.art-97d4c6bd9ffd4733b2cb3f2178dc611a | 2022-12-21T22:40:16Z | eng | Frontiers Media S.A. | Frontiers in Neuroscience | ISSN 1662-453X | 2021-01-01 | vol. 14 | article 615435 | doi:10.3389/fnins.2020.615435
LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
Affiliations:
1. The Key Laboratory of Clinical and Medical Engineering, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, China
2. The Department of Medical Engineering, Wannan Medical College, Wuhu, China
3. The Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
4. The KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada
5. The Department of Electrical Engineering, College of Engineering, Zhejiang Normal University, Jinhua, China
Authors: Zhengyuan Xu (1, 2), Wentao Xiang (1), Songsheng Zhu (1), Rui Zeng (3), Cesar Marquez-Chin (4), Zhen Chen (1), Xianqing Chen (5), Bin Liu (1), Jianqing Li (1)
Keywords: multi-modality medical image; latent low-rank representation; fully convolutional networks; medical image fusion; Laplacian pyramid
Online access: https://www.frontiersin.org/articles/10.3389/fnins.2020.615435/full
spellingShingle Zhengyuan Xu
Wentao Xiang
Songsheng Zhu
Rui Zeng
Cesar Marquez-Chin
Zhen Chen
Xianqing Chen
Bin Liu
Jianqing Li
LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
Frontiers in Neuroscience
multi-modality medical image
latent low-rank representation
fully convolutional networks
medical image fusion
Laplacian pyramid
title LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_full LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_fullStr LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_full_unstemmed LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_short LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_sort latlrr fcns latent low rank representation with fully convolutional networks for medical image fusion
topic multi-modality medical image
latent low-rank representation
fully convolutional networks
medical image fusion
Laplacian pyramid
url https://www.frontiersin.org/articles/10.3389/fnins.2020.615435/full
work_keys_str_mv AT zhengyuanxu latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT zhengyuanxu latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT wentaoxiang latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT songshengzhu latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT ruizeng latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT cesarmarquezchin latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT zhenchen latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT xianqingchen latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT binliu latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT jianqingli latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion