Incorporating a Novel Dual Transfer Learning Approach for Medical Images
Transfer learning approaches have recently emerged to reduce the need for large numbers of labeled medical images. However, these approaches still have limitations arising from the mismatch between the source and target domains. This study therefore proposes a novel approach, called Dual Transfer Learning (DTL), based on the convergence of patterns between the source and target domains.
Main Authors: Abdulrahman Abbas Mukhlif, Belal Al-Khateeb, Mazin Abed Mohammed
Format: Article
Language: English
Published: MDPI AG, 2023-01-01
Series: Sensors
Subjects: transfer learning; fine-tuning; data augmentation; skin cancer; breast cancer; imbalanced datasets
Online Access: https://www.mdpi.com/1424-8220/23/2/570
author | Abdulrahman Abbas Mukhlif; Belal Al-Khateeb; Mazin Abed Mohammed
collection | DOAJ |
description | Transfer learning approaches have recently emerged to reduce the need for large numbers of labeled medical images. However, these approaches still have limitations arising from the mismatch between the source and target domains. This study therefore proposes a novel approach, called Dual Transfer Learning (DTL), based on the convergence of patterns between the source and target domains. The proposed approach is applied to four pre-trained models (VGG16, Xception, ResNet50, and MobileNetV2) on two datasets, ISIC2020 skin cancer images and ICIAR2018 breast cancer images, by fine-tuning the last layers first on a sufficient number of unlabeled images of the same disease and then on a small number of labeled images of the target task; data augmentation is also used to balance the classes and increase the number of samples. The experimental results show that the proposed approach improves the performance of all models: without data augmentation, VGG16, Xception, ResNet50, and MobileNetV2 improve by 0.28%, 10.96%, 15.73%, and 10.4%, respectively, while with data augmentation they improve by 19.66%, 34.76%, 31.76%, and 33.03%, respectively. Xception achieved the highest performance among the four models when classifying skin cancer images in the ISIC2020 dataset, obtaining 96.83%, 96.919%, 96.826%, 96.825%, 99.07%, and 94.58% for accuracy, precision, recall, F1-score, sensitivity, and specificity, respectively. When classifying breast cancer images in the ICIAR2018 dataset, Xception obtained 99%, 99.003%, 98.995%, 99%, 98.55%, and 99.14% for the same metrics. These results show that the proposed approach improves model performance when fine-tuning is performed on unlabeled images of the same disease.
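The description above outlines a two-stage ("dual") fine-tuning scheme: an ImageNet-pretrained backbone is first adapted on unlabeled images of the same disease, then fine-tuned on a small labeled target set with data augmentation to balance the classes. Below is a minimal Keras sketch of that structure only, not the authors' pipeline: the rotation-prediction pretext task used for the unlabeled stage, the number of unfrozen layers, the input size, the binary target task, and the dataset placeholders (`unlabeled_rotated_ds`, `labeled_target_ds`, `val_ds`) are all illustrative assumptions.

```python
# Minimal two-stage ("dual") fine-tuning sketch in Keras.
# NOT the paper's exact pipeline: the rotation-prediction pretext task, the
# 20 unfrozen layers, the input size, and the dataset names are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SHAPE = (299, 299, 3)   # Xception's default input size
NUM_CLASSES = 2             # assumed binary target task (e.g. benign vs. malignant)

# ImageNet-pretrained backbone; freeze everything except the last layers.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg", input_shape=IMG_SHAPE)
for layer in base.layers[:-20]:   # "last layers" -- the exact count is assumed
    layer.trainable = False

# Stage 1: adapt the unfrozen layers to the medical domain using unlabeled
# images of the same disease (rotation prediction is this sketch's stand-in
# objective for making use of unlabeled data).
rotation_head = layers.Dense(4, activation="softmax", name="rotation")(base.output)
stage1 = tf.keras.Model(base.input, rotation_head)
stage1.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
               loss="sparse_categorical_crossentropy")
# stage1.fit(unlabeled_rotated_ds, epochs=...)   # placeholder dataset

# Stage 2: reuse the adapted backbone and fine-tune on the small labeled set
# of the target task, with augmentation to balance classes and add samples.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])
inputs = tf.keras.Input(shape=IMG_SHAPE)
x = augment(inputs)
x = tf.keras.applications.xception.preprocess_input(x)  # scale to [-1, 1]
x = base(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="diagnosis")(x)
stage2 = tf.keras.Model(inputs, outputs)
stage2.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
               loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
# stage2.fit(labeled_target_ds, validation_data=val_ds, epochs=...)   # placeholders
```

The same skeleton applies to the other three backbones by swapping the `tf.keras.applications` constructor (and its default input size) for VGG16, ResNet50, or MobileNetV2.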
format | Article |
id | doaj.art-1cadcecbe4a04e469ec0e8e439915ab9 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
publishDate | 2023-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
doi | 10.3390/s23020570
affiliation | Computer Science Department, College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Anbar, Iraq (Mukhlif, Al-Khateeb, and Mohammed)
title | Incorporating a Novel Dual Transfer Learning Approach for Medical Images |
topic | transfer learning; fine-tuning; data augmentation; skin cancer; breast cancer; imbalanced datasets
url | https://www.mdpi.com/1424-8220/23/2/570 |