Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction

Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world. It is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that helps to improve the performance of DR recognition models. To extract this representation, features from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. The resulting representations are used to train a Deep Neural Network (DNN) for DR identification and severity level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to a better representation than features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we observe that cross average pooling based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82 for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.
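
As a concrete illustration of the pipeline described in the abstract, the sketch below shows a minimal Keras version of the idea: deep features are taken from two frozen pre-trained ConvNets (Xception and VGG16), blended with the 1D-pooling fusion variant (the abstract does not fully specify the cross average pooling operator), and classified by a small DNN that applies dropout at the input layer. This is an assumed sketch, not the authors' released code; the input resizing, the 256-unit hidden layer, and the training hyper-parameters are illustrative choices.

```python
# Minimal sketch (assumptions noted in comments), not the authors' released code:
# blend deep features from two pre-trained ConvNets and train a small DNN with
# dropout at the input layer, as outlined in the abstract.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception, VGG16
from tensorflow.keras.applications.xception import preprocess_input as xception_prep
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg16_prep

# Frozen pre-trained extractors; global average pooling gives one vector per image.
xception = Xception(weights="imagenet", include_top=False, pooling="avg")  # 2048-d
vgg16 = VGG16(weights="imagenet", include_top=False, pooling="avg")        # 512-d

def blended_features(images):
    """images: float32 array (N, 299, 299, 3) with pixel values in [0, 255]."""
    f_xception = xception.predict(xception_prep(images.copy()), verbose=0)
    # VGG16 expects 224x224 inputs, so resize before its preprocessing.
    resized = tf.image.resize(images, (224, 224)).numpy()
    f_vgg = vgg16.predict(vgg16_prep(resized), verbose=0)
    # 1D average pooling over the concatenated multi-modal feature vector:
    # adjacent pairs of values are averaged, halving the dimensionality.
    fused = np.concatenate([f_xception, f_vgg], axis=1)        # (N, 2560)
    return fused.reshape(fused.shape[0], -1, 2).mean(axis=2)   # (N, 1280)

def build_dnn(input_dim, num_classes):
    """DNN classifier with dropout applied directly at the input layer."""
    return models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dropout(0.5),                     # dropout at the input layer
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Usage sketch: APTOS 2019 grades severity on 5 levels (0 = no DR ... 4 = proliferative DR).
# features = blended_features(train_images)
# model = build_dnn(features.shape[1], num_classes=5)
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
# model.fit(features, train_labels, epochs=20, batch_size=32)
```

Because the extractors stay frozen, the DNN trains on fixed blended vectors, which is consistent with the abstract's comparison of convergence speed on blended versus uni-modal deep features.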

Bibliographic Details
Main Authors: Jyostna Devi Bodapati (Department of CSE, Vignan’s Foundation for Science Technology and Research, Guntur 522213, India); Veeranjaneyulu Naralasetti (Department of IT, Vignan’s Foundation for Science Technology and Research, Guntur 522213, India); Shaik Nagur Shareef (Department of CSE, Vignan’s Foundation for Science Technology and Research, Guntur 522213, India); Saqib Hakak (Faculty of Computer Science, University of Northern British Columbia, Prince George, BC V2N 4Z9, Canada); Muhammad Bilal (Department of Computer and Electronics Systems Engineering, Hankuk University of Foreign Studies, Yongin-si 17035, Korea); Praveen Kumar Reddy Maddikunta (School of Information Technology and Engineering, The Vellore Institute of Technology (VIT), Vellore 632014, India); Ohyun Jo (Department of Computer Science, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea)
Format: Article
Language: English
Published: MDPI AG, 2020-05-01
Series: Electronics, Vol. 9, Issue 6, Article 914
ISSN: 2079-9292
DOI: 10.3390/electronics9060914
Subjects: diabetic retinopathy (DR); pre-trained deep ConvNet; uni-modal deep features; multi-modal deep features; transfer learning; 1D pooling
Online Access: https://www.mdpi.com/2079-9292/9/6/914