Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model
Each of us has a unique way of communicating with the world, and such communication helps us interpret life. Sign language is the primary means of communication for people with hearing and speech disabilities. When a sign language user interacts with a non-signer, it is difficult for the signer to make themselves understood...
Main Authors: | Kanchon Kanti Podder, Maymouna Ezeddin, Muhammad E. H. Chowdhury, Md. Shaheenur Islam Sumon, Anas M. Tahir, Mohamed Arselene Ayari, Proma Dutta, Amith Khandakar, Zaid Bin Mahbub, Muhammad Abdul Kadir |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-08-01 |
Series: | Sensors |
Subjects: | Arabic Sign Language; deep learning; dynamic sign language; segmentation; MediaPipe |
Online Access: | https://www.mdpi.com/1424-8220/23/16/7156 |
_version_ | 1797583262225268736 |
---|---|
author | Kanchon Kanti Podder; Maymouna Ezeddin; Muhammad E. H. Chowdhury; Md. Shaheenur Islam Sumon; Anas M. Tahir; Mohamed Arselene Ayari; Proma Dutta; Amith Khandakar; Zaid Bin Mahbub; Muhammad Abdul Kadir |
author_facet | Kanchon Kanti Podder; Maymouna Ezeddin; Muhammad E. H. Chowdhury; Md. Shaheenur Islam Sumon; Anas M. Tahir; Mohamed Arselene Ayari; Proma Dutta; Amith Khandakar; Zaid Bin Mahbub; Muhammad Abdul Kadir |
author_sort | Kanchon Kanti Podder |
collection | DOAJ |
description | Each of us has a unique way of communicating with the world, and such communication helps us interpret life. Sign language is the primary means of communication for people with hearing and speech disabilities. When a sign language user interacts with a non-signer, it is difficult for the signer to make themselves understood. A sign language recognition system can bridge this gap by interpreting a signer's gestures for a non-sign language user. This study presents a sign language recognition system capable of recognizing Arabic Sign Language from recorded RGB videos. To achieve this, two datasets were considered: (1) a raw video dataset and (2) a face–hand region-based segmented dataset produced from the raw dataset. Moreover, an operational-layer-based multi-layer perceptron, “SelfMLP”, is proposed in this study to build CNN-LSTM-SelfMLP models for Arabic Sign Language recognition. MobileNetV2- and ResNet18-based CNN backbones and three SelfMLP variants were used to construct six CNN-LSTM-SelfMLP models for a performance comparison on Arabic Sign Language recognition. The models were evaluated in signer-independent mode to reflect real-time application scenarios. As a result, MobileNetV2-LSTM-SelfMLP on the segmented dataset achieved the best accuracy of 87.69%, with 88.57% precision, 87.69% recall, 87.72% F1 score, and 99.75% specificity. Overall, face–hand region-based segmentation and the SelfMLP-infused MobileNetV2-LSTM-SelfMLP model surpassed previously reported results on Arabic Sign Language recognition by 10.970% in accuracy. (Illustrative code sketches of this pipeline follow the record fields below.) |
first_indexed | 2024-03-10T23:35:26Z |
format | Article |
id | doaj.art-301bdcb2c3c34e6b82d04a16f11fb8aa |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-10T23:35:26Z |
publishDate | 2023-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model. Sensors, vol. 23, iss. 16, article 7156, MDPI AG, 2023-08-01; ISSN 1424-8220; DOI 10.3390/s23167156; English; DOAJ record doaj.art-301bdcb2c3c34e6b82d04a16f11fb8aa (2023-11-19T02:57:36Z). Authors and affiliations: Kanchon Kanti Podder (Department of Biomedical Physics & Technology, University of Dhaka, Dhaka 1000, Bangladesh); Maymouna Ezeddin (Department of Computer Science, Hamad Bin Khalifa University, Doha 34110, Qatar); Muhammad E. H. Chowdhury (Department of Electrical Engineering, Qatar University, Doha 2713, Qatar); Md. Shaheenur Islam Sumon (Department of Biomedical Engineering, Military Institute of Science and Technology (MIST), Dhaka 1216, Bangladesh); Anas M. Tahir (Department of Electrical Engineering, Qatar University, Doha 2713, Qatar); Mohamed Arselene Ayari (Department of Civil and Architectural Engineering, Qatar University, Doha 2713, Qatar); Proma Dutta (Department of Electrical & Electronic Engineering, Chittagong University of Engineering & Technology, Chittagong 4349, Bangladesh); Amith Khandakar (Department of Electrical Engineering, Qatar University, Doha 2713, Qatar); Zaid Bin Mahbub (Department of Mathematics and Physics, North South University, Dhaka 1229, Bangladesh); Muhammad Abdul Kadir (Department of Biomedical Physics & Technology, University of Dhaka, Dhaka 1000, Bangladesh). |
spellingShingle | Kanchon Kanti Podder; Maymouna Ezeddin; Muhammad E. H. Chowdhury; Md. Shaheenur Islam Sumon; Anas M. Tahir; Mohamed Arselene Ayari; Proma Dutta; Amith Khandakar; Zaid Bin Mahbub; Muhammad Abdul Kadir; Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model; Sensors; Arabic Sign Language; deep learning; dynamic sign language; segmentation; MediaPipe |
title | Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model |
title_full | Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model |
title_fullStr | Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model |
title_full_unstemmed | Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model |
title_short | Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model |
title_sort | signer independent arabic sign language recognition system using deep learning model |
topic | Arabic Sign Language; deep learning; dynamic sign language; segmentation; MediaPipe |
url | https://www.mdpi.com/1424-8220/23/16/7156 |
work_keys_str_mv | AT kanchonkantipodder signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT maymounaezeddin signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT muhammadehchowdhury signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT mdshaheenurislamsumon signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT anasmtahir signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT mohamedarseleneayari signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT promadutta signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT amithkhandakar signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT zaidbinmahbub signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel AT muhammadabdulkadir signerindependentarabicsignlanguagerecognitionsystemusingdeeplearningmodel |
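The pipeline summarized in the description field above first isolates the face and hand regions in each frame before the video is fed to the recognition model. The following is a minimal sketch of such face–hand masking using MediaPipe Holistic; the padded bounding boxes, the `mask_face_and_hands` helper, and the padding value are illustrative assumptions, not the authors' published code.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def mask_face_and_hands(frame_bgr, holistic, pad=0.05):
    """Black out everything except padded boxes around the detected face and
    hand landmarks (illustrative stand-in for face-hand region segmentation)."""
    h, w = frame_bgr.shape[:2]
    results = holistic.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    mask = np.zeros((h, w), dtype=np.uint8)
    for lms in (results.face_landmarks,
                results.left_hand_landmarks,
                results.right_hand_landmarks):
        if lms is None:
            continue  # region not detected in this frame
        xs = [lm.x for lm in lms.landmark]
        ys = [lm.y for lm in lms.landmark]
        x0, x1 = int(max(min(xs) - pad, 0) * w), int(min(max(xs) + pad, 1) * w)
        y0, y1 = int(max(min(ys) - pad, 0) * h), int(min(max(ys) + pad, 1) * h)
        mask[y0:y1, x0:x1] = 255  # keep this region
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

# Usage on a recorded RGB video, frame by frame:
# with mp_holistic.Holistic(static_image_mode=False) as holistic:
#     masked_frame = mask_face_and_hands(frame, holistic)
```

The recognition model itself is described as a CNN backbone (MobileNetV2 or ResNet18) feeding an LSTM and then a SelfMLP head. Below is a minimal PyTorch sketch of that CNN-LSTM-MLP structure; a plain MLP stands in for the operational-layer-based SelfMLP, and names such as `SignClassifier`, the class count, and the clip shape are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class SignClassifier(nn.Module):
    """CNN-LSTM-MLP video classifier: MobileNetV2 per-frame features, an LSTM
    over the frame sequence, and an MLP head (plain-MLP stand-in for SelfMLP)."""
    def __init__(self, num_classes, hidden_size=256):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)  # pretrained weights could be loaded here
        self.cnn = backbone.features                  # per-frame feature maps (1280 channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden_size, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_size, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, clips):                         # clips: (B, T, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.pool(self.cnn(clips.reshape(b * t, c, h, w)))
        feats = feats.reshape(b, t, -1)               # (B, T, 1280) per-frame features
        seq, _ = self.lstm(feats)
        return self.head(seq[:, -1])                  # classify from the last time step

# Example: a batch of 2 clips, 16 frames each, 224x224 RGB (class count is illustrative).
model = SignClassifier(num_classes=50)
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 50])
```

In the signer-independent evaluation used by the study, the training and test splits contain disjoint sets of signers, so the reported accuracy reflects generalization to people not seen during training.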