A twin convolutional neural network with hybrid binary optimizer for multimodal breast cancer digital image classification

Abstract Deep learning techniques are widely applied to unimodal medical image analysis, where significant classification accuracy has been observed. However, real-world diagnosis of chronic diseases such as breast cancer often requires multimodal data streams spanning different visual and textual modalities. Mammography, magnetic resonance imaging (MRI), and image-guided breast biopsy are a few of the visual streams physicians consider when isolating cases of breast cancer. Unfortunately, most studies applying deep learning to classification problems in digital breast images have narrowed their scope to unimodal samples. This is understandable given the challenging nature of multimodal image abnormality classification, where high-dimensional heterogeneous learned features must be fused and projected into a common representation space. This paper presents a novel deep learning approach, a dual/twin convolutional neural network (TwinCNN) framework, to address the challenge of classifying breast cancer images across modalities. First, modality-based feature learning is achieved by extracting both low-level and high-level features using the networks embedded in TwinCNN. Secondly, to address the high dimensionality of the extracted features, a binary optimization method is adapted to eliminate non-discriminant features from the search space. Furthermore, a novel feature fusion method computationally leverages the ground-truth and predicted labels of each sample to enable multimodal classification. To evaluate the proposed method, digital mammography images and digital histopathology breast biopsy samples were drawn from the benchmark MIAS and BreakHis datasets, respectively. Classification accuracy and area under the curve (AUC) for the single modalities were 0.755 and 0.861871 for histology, and 0.791 and 0.638 for mammography. With the fused-feature method, classification accuracy reached 0.977 for histology, 0.913 for mammography, and 0.667 for multimodality. The findings confirm that multimodal image classification based on a combination of image features and predicted labels improves performance. In addition, the study shows that feature dimensionality reduction based on a binary optimizer eliminates non-discriminant features that would otherwise bottleneck the classifier.
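The abstract describes three computational steps: per-modality feature extraction with twin CNN branches, feature selection with a binary optimizer, and fusion of the selected features for joint classification. Below is a minimal PyTorch sketch of that pipeline, written for this record rather than taken from the paper: the branch depths, the 256-dimensional feature size, and the random bit-flip search standing in for the hybrid binary optimizer are all illustrative assumptions, and the paper's label-aware fusion is reduced here to simple feature concatenation.

```python
# Minimal sketch of the pipeline described in the abstract. This is NOT the
# authors' released code: branch depths, the 256-d feature size, and the
# bit-flip search below (a stand-in for the hybrid binary optimizer) are
# illustrative assumptions.
import torch
import torch.nn as nn


class BranchCNN(nn.Module):
    """One twin branch: extracts a feature vector from one image modality."""

    def __init__(self, in_channels: int, feat_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool the low- and high-level feature maps
        )
        self.proj = nn.Linear(128, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))


class TwinCNN(nn.Module):
    """Twin branches (one per modality), per-branch binary feature masks,
    and a shared classifier over the fused, mask-selected features."""

    def __init__(self, feat_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.histology = BranchCNN(3, feat_dim)    # RGB biopsy patches
        self.mammography = BranchCNN(1, feat_dim)  # grayscale mammograms
        # 0/1 masks produced by the binary optimizer (not by backprop);
        # stored as buffers so they move with the model but are not trained.
        self.register_buffer("mask_h", torch.ones(feat_dim))
        self.register_buffer("mask_m", torch.ones(feat_dim))
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x_hist: torch.Tensor, x_mammo: torch.Tensor) -> torch.Tensor:
        f_h = self.histology(x_hist) * self.mask_h      # zero out non-discriminant dims
        f_m = self.mammography(x_mammo) * self.mask_m
        return self.classifier(torch.cat([f_h, f_m], dim=1))


def binary_feature_search(fitness, dim: int, iters: int = 50, pop: int = 20) -> torch.Tensor:
    """Toy stand-in for the hybrid binary optimizer: a random bit-flip search
    that keeps the best 0/1 mask under a caller-supplied fitness function
    (e.g. validation accuracy measured with the candidate mask applied)."""
    best = torch.ones(dim)
    best_fit = fitness(best)
    for _ in range(iters):
        for _ in range(pop):
            cand = best.clone()
            flips = torch.randint(0, dim, (max(1, dim // 20),))
            cand[flips] = 1.0 - cand[flips]  # flip roughly 5% of the bits
            fit = fitness(cand)
            if fit > best_fit:
                best, best_fit = cand, fit
    return best
```

In practice a mask would be searched per modality with a fitness such as held-out accuracy, e.g. `binary_feature_search(lambda m: val_acc_with_mask(model, m), dim=256)` (where `val_acc_with_mask` is a hypothetical evaluation helper), then copied into `mask_h` or `mask_m` before final training; the paper's fusion additionally folds in each branch's predicted label, which the plain concatenation above omits.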

Bibliographic Details
Main Authors: Olaide N. Oyelade (School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast), Eric Aghiomesi Irunokhai (Department of Computer Science, Federal College of Wildlife Management), Hui Wang (School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast)
Format: Article
Language: English
Published: Nature Portfolio, 2024-01-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-024-51329-8