Enhancing Signer-Independent Recognition of Isolated Sign Language through Advanced Deep Learning Techniques and Feature Fusion

Sign Language Recognition (SLR) systems are crucial bridges facilitating communication between deaf or hard-of-hearing individuals and the hearing world. Existing SLR technologies, while advancing, often grapple with challenges such as accurately capturing the dynamic and complex nature of sign language, which includes both manual and non-manual elements like facial expressions and body movements. These systems sometimes fall short in environments with different backgrounds or lighting conditions, hindering their practical applicability and robustness. This study introduces an innovative approach to isolated sign language word recognition using a novel deep learning model that combines the strengths of both residual three-dimensional (R3D) and temporally separated (R(2+1)D) convolutional blocks. The R3(2+1)D-SLR network model demonstrates a superior ability to capture the intricate spatial and temporal features crucial for accurate sign recognition. Our system combines data from the signer's body, hands, and face, extracted using the R3(2+1)D-SLR model, and employs a Support Vector Machine (SVM) for classification. It demonstrates remarkable improvements in accuracy and robustness across various backgrounds by utilizing pose data over RGB data. With this pose-based approach, our proposed system achieved 94.52% and 98.53% test accuracy in signer-independent evaluations on the BosphorusSign22k-general and LSA64 datasets, respectively.
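The pipeline described in the abstract — per-stream feature extraction from body, hand, and face data, late fusion by concatenation, and SVM classification — can be sketched as follows. This is a minimal illustration with NumPy and scikit-learn, not the authors' code: the R3(2+1)D-SLR extractors are stubbed as random projections, and all shapes, stream names, and class counts are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-stream feature extractors: in the paper these are
# R3(2+1)D-SLR networks; here each is stubbed as a random projection
# mapping a pose clip (frames x keypoints x 2 coordinates) to a 512-d vector.
def extract_features(clip, proj):
    # Flatten keypoints per frame, average over time, project to feat_dim.
    return clip.reshape(clip.shape[0], -1).mean(axis=0) @ proj

n_frames, n_keypoints, feat_dim = 32, 25, 512
projections = {s: rng.standard_normal((n_keypoints * 2, feat_dim))
               for s in ("body", "hands", "face")}

def fuse(clips):
    # Late fusion: concatenate the body, hand, and face descriptors.
    return np.concatenate([extract_features(clips[s], projections[s])
                           for s in ("body", "hands", "face")])

# Toy dataset: 40 clips across 4 sign classes (labels are random here).
X = np.stack([fuse({s: rng.standard_normal((n_frames, n_keypoints, 2))
                    for s in ("body", "hands", "face")}) for _ in range(40)])
y = rng.integers(0, 4, size=40)

clf = SVC(kernel="rbf").fit(X, y)
print(X.shape)  # fused descriptor: 3 streams x 512 = 1536 dims -> (40, 1536)
```

The design point worth noting is the late-fusion step: each stream yields a fixed-length descriptor, so the SVM sees one concatenated vector per clip regardless of how many frames the clip had.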

Bibliographic Details
Main Authors: Ali Akdag, Omer Kaan Baykan
Format: Article
Language: English
Published: MDPI AG, 2024-03-01
Series: Electronics
Subjects: sign language recognition; deep learning; feature fusion
Online Access: https://www.mdpi.com/2079-9292/13/7/1188
ISSN: 2079-9292
Affiliations: Department of Computer Engineering, Tokat Gaziosmanpaşa University, Taşlıçiftlik Campus, 60250 Tokat, Türkiye; Department of Computer Engineering, Konya Technical University, 42250 Konya, Türkiye
Citation: Electronics, vol. 13, no. 7, article 1188, 2024. DOI: 10.3390/electronics13071188