Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and Electroencephalogram
Depression remains one of the most prevalent mental health disorders worldwide. If left untreated, it can lead to suicidal ideation and attempts. Proper diagnosis of Major Depressive Disorder (MDD) and evaluation of its early stages are needed to prevent adverse outcomes. Early detection is...
Main Authors: | Chetna Gupta, Vikas Khullar, Nitin Goyal, Kirti Saini, Ritu Baniwal, Sushil Kumar, Rashi Rastogi |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-12-01 |
Series: | Diagnostics |
Subjects: | major depressive disorder; federated learning; deep learning; Bi-LSTM; IIDs; non-IIDs |
Online Access: | https://www.mdpi.com/2075-4418/14/1/43 |
_version_ | 1797359000332795904 |
---|---|
author | Chetna Gupta Vikas Khullar Nitin Goyal Kirti Saini Ritu Baniwal Sushil Kumar Rashi Rastogi |
author_facet | Chetna Gupta Vikas Khullar Nitin Goyal Kirti Saini Ritu Baniwal Sushil Kumar Rashi Rastogi |
author_sort | Chetna Gupta |
collection | DOAJ |
description | Depression remains one of the most prevalent mental health disorders worldwide. If left untreated, it can lead to suicidal ideation and attempts. Proper diagnosis of Major Depressive Disorder (MDD) and evaluation of its early stages are needed to prevent adverse outcomes. Early detection is critical for identifying a variety of serious conditions. To protect MDD patients safely and effectively, it is crucial to automate diagnosis and make decision-support tools widely available. Although various classification systems exist for diagnosing MDD, no reliable, secure method meeting these requirements has been established to date. This paper presents a federated deep learning-based multimodal system for MDD classification using electroencephalography (EEG) and audio datasets while meeting data privacy requirements. The performance of the federated learning (FL) model was tested on independent and identically distributed (IID) and non-IID data. The study began by extracting features with several pre-trained models and ultimately selected bidirectional long short-term memory (Bi-LSTM) as the base model, as it achieved the highest validation accuracy on audio data (91%, compared to 85% for a convolutional neural network and 89% for an LSTM). The Bi-LSTM model also achieved a validation accuracy of 98.9% on EEG data. The FL method was then used to run experiments on IID and non-IID datasets. The FL-based multimodal model achieved an exceptional training and validation accuracy of 99.9% when trained and evaluated on both IID and non-IID datasets. These results show that the FL multimodal system performs almost as well as the centrally trained Bi-LSTM multimodal system and underscore its suitability for processing both IID and non-IID data. Several clients outperformed conventional pre-trained models within the federated multimodal framework using EEG and audio datasets.
The proposed framework stands out from other MDD classification techniques through its distinctive features: multimodality and data privacy on resource-constrained edge machines. These features make it a highly suitable approach for the early classification of MDD patients. |
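The description above outlines federated training across clients that keep their EEG and audio data private, with only model parameters shared. As a minimal illustration of the server-side aggregation step common to such systems (federated averaging), here is a hypothetical numpy sketch; the paper does not publish its code, so this is not the authors' implementation:

```python
# Minimal federated-averaging (FedAvg) sketch: each client trains locally,
# then the server averages parameter vectors weighted by client dataset size.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors (one FL round)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total  # shape: (n_clients,)
    return coeffs @ stacked                  # shape: (n_params,)

# Toy round: three "clients" with different data volumes.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]
global_w = fed_avg(w, sizes)  # → array([3., 4.])
```

In a real cross-silo deployment the parameter vectors would be the flattened Bi-LSTM weights, and only these vectors (never raw EEG or audio samples) would leave each silo, which is the privacy property the record's title emphasizes.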
first_indexed | 2024-03-08T15:09:19Z |
format | Article |
id | doaj.art-5903cc9eaea44bfaac9fd653f5d83e4a |
institution | Directory Open Access Journal |
issn | 2075-4418 |
language | English |
last_indexed | 2024-03-08T15:09:19Z |
publishDate | 2023-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Diagnostics |
spelling | doaj.art-5903cc9eaea44bfaac9fd653f5d83e4a2024-01-10T14:53:44ZengMDPI AGDiagnostics2075-44182023-12-011414310.3390/diagnostics14010043Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and ElectroencephalogramChetna Gupta0Vikas Khullar1Nitin Goyal2Kirti Saini3Ritu Baniwal4Sushil Kumar5Rashi Rastogi6Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140417, Punjab, IndiaChitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140417, Punjab, IndiaDepartment of Computer Science and Engineering, School of Engineering and Technology, Central University of Haryana, Mahendergarh 123031, Haryana, IndiaDepartment of Electronics and Communication Engineering, University Institute of Engineering and Technology, Kurukshetra University, Kurukshetra 136119, Haryana, IndiaDepartment of Computer Science, Jyotiba Phule Government College, Radaur, Yamunanagar 135133, Haryana, IndiaDepartment of Computer Science and Engineering, School of Engineering and Technology, Central University of Haryana, Mahendergarh 123031, Haryana, IndiaDepartment of Computer Applications, Sir Chottu Ram Institute of Engineering & Technology, Ch. Charan Singh University, Meerut 250001, Uttar Pradesh, IndiaIn this day and age, depression is still one of the biggest problems in the world. If left untreated, it can lead to suicidal thoughts and attempts. There is a need for proper diagnoses of Major Depressive Disorder (MDD) and evaluation of the early stages to stop the side effects. Early detection is critical to identify a variety of serious conditions. In order to provide safe and effective protection to MDD patients, it is crucial to automate diagnoses and make decision-making tools widely available. 
Although there are various classification systems for the diagnosis of MDD, no reliable, secure method that meets these requirements has been established to date. In this paper, a federated deep learning-based multimodal system for MDD classification using electroencephalography (EEG) and audio datasets is presented while meeting data privacy requirements. The performance of the federated learning (FL) model was tested on independent and identically distributed (IID) and non-IID data. The study began by extracting features from several pre-trained models and ultimately decided to use bidirectional long short-term memory (Bi-LSTM) as the base model, as it had the highest validation accuracy of 91% compared to a convolutional neural network and LSTM with 85% and 89% validation accuracy on audio data, respectively. The Bi-LSTM model also achieved a validation accuracy of 98.9% for EEG data. The FL method was then used to perform experiments on IID and non-IID datasets. The FL-based multimodal model achieved an exceptional training and validation accuracy of 99.9% when trained and evaluated on both IID and non-IID datasets. These results show that the FL multimodal system performs almost as well as the Bi-LSTM multimodal system and emphasize its suitability for processing IID and non-IID data. Several clients were found to perform better than conventional pre-trained models in a multimodal framework for federated learning using EEG and audio datasets. The proposed framework stands out from other classification techniques for MDD due to its special features, such as multimodality and data privacy for edge machines with limited resources. Due to these additional features, the framework concept is the most suitable alternative approach for the early classification of MDD patients.https://www.mdpi.com/2075-4418/14/1/43major depressive disorderfederated learningdeep learningBi-LSTMIIDsnon-IIDs |
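The abstract repeatedly contrasts IID and non-IID client data. A minimal sketch of how these two regimes are commonly simulated in FL experiments follows (hypothetical helper functions and a toy label array, not the paper's code), assuming numpy:

```python
# IID vs non-IID client partitioning, as commonly simulated in FL work.
import numpy as np

def iid_split(n_samples, n_clients, seed=0):
    """Shuffle indices and deal them out evenly: every client sees all classes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, n_clients)

def non_iid_split(labels, n_clients):
    """Sort by label, then shard: each client gets a skewed class mix."""
    order = np.argsort(labels, kind="stable")
    return np.array_split(order, n_clients)

labels = np.array([0, 1] * 50)  # 100 samples, two balanced classes
iid_parts = iid_split(len(labels), 4)
noniid_parts = non_iid_split(labels, 4)
# Under the non-IID split, the first shard contains only class 0.
```

Label-sorted sharding is one standard way to induce the class skew that makes non-IID federated training harder than the IID case; the record's claim is that its multimodal FL model holds up under both splits.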
spellingShingle | Chetna Gupta Vikas Khullar Nitin Goyal Kirti Saini Ritu Baniwal Sushil Kumar Rashi Rastogi Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and Electroencephalogram Diagnostics major depressive disorder federated learning deep learning Bi-LSTM IIDs non-IIDs |
title | Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and Electroencephalogram |
title_full | Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and Electroencephalogram |
title_fullStr | Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and Electroencephalogram |
title_full_unstemmed | Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and Electroencephalogram |
title_short | Cross-Silo, Privacy-Preserving, and Lightweight Federated Multimodal System for the Identification of Major Depressive Disorder Using Audio and Electroencephalogram |
title_sort | cross silo privacy preserving and lightweight federated multimodal system for the identification of major depressive disorder using audio and electroencephalogram |
topic | major depressive disorder federated learning deep learning Bi-LSTM IIDs non-IIDs |
url | https://www.mdpi.com/2075-4418/14/1/43 |
work_keys_str_mv | AT chetnagupta crosssiloprivacypreservingandlightweightfederatedmultimodalsystemfortheidentificationofmajordepressivedisorderusingaudioandelectroencephalogram AT vikaskhullar crosssiloprivacypreservingandlightweightfederatedmultimodalsystemfortheidentificationofmajordepressivedisorderusingaudioandelectroencephalogram AT nitingoyal crosssiloprivacypreservingandlightweightfederatedmultimodalsystemfortheidentificationofmajordepressivedisorderusingaudioandelectroencephalogram AT kirtisaini crosssiloprivacypreservingandlightweightfederatedmultimodalsystemfortheidentificationofmajordepressivedisorderusingaudioandelectroencephalogram AT ritubaniwal crosssiloprivacypreservingandlightweightfederatedmultimodalsystemfortheidentificationofmajordepressivedisorderusingaudioandelectroencephalogram AT sushilkumar crosssiloprivacypreservingandlightweightfederatedmultimodalsystemfortheidentificationofmajordepressivedisorderusingaudioandelectroencephalogram AT rashirastogi crosssiloprivacypreservingandlightweightfederatedmultimodalsystemfortheidentificationofmajordepressivedisorderusingaudioandelectroencephalogram |