Korean Sign Language Recognition Using Transformer-Based Deep Neural Network

Sign language recognition (SLR) is one of the crucial applications of hand gesture recognition and computer vision research. Many researchers have been working to develop hand gesture-based SLR applications for English, Turkish, Arabic, and other sign languages. However, few studies have been conducted on Korean sign language (KSL) classification because few KSL datasets are publicly available. In addition, existing KSL recognition work still struggles to run efficiently because lighting and background complexity are the major problems in this field. In the last decade, researchers have successfully applied vision transformers to sign language recognition by extracting long-range dependencies within an image. However, there is a significant performance and efficiency gap between CNNs and transformers, and we have not yet found a KSL recognition model that combines a CNN with a transformer. To overcome these challenges, we propose a convolution- and transformer-based multi-branch network that takes advantage of the transformer's long-range dependency computation and the CNN's local feature extraction for sign language recognition. We extract initial features with a grained module and then extract features from the transformer and CNN branches in parallel. After concatenating the local and long-range dependency features, a new classification module performs the final classification. We evaluated the proposed model on a KSL benchmark dataset and on our lab dataset, where it achieved 89.00% accuracy on the 77-label KSL benchmark and 98.30% accuracy on the lab dataset. This performance indicates that the proposed model generalizes well at considerably lower computational cost.


Bibliographic Details
Main Authors: Jungpil Shin, Abu Saleh Musa Miah, Md. Al Mehedi Hasan, Koki Hirooka, Kota Suzuki, Hyoun-Sup Lee, Si-Woong Jang
Format: Article
Language: English
Published: MDPI AG 2023-02-01
Series:Applied Sciences
Subjects: Korean sign language (KSL); transformer; convolutional neural network (CNN); sign language recognition (SLR); hand gesture recognition (HGR)
Online Access: https://www.mdpi.com/2076-3417/13/5/3029
ISSN: 2076-3417
DOAJ record ID: doaj.art-3b78090b6f06411f9f2dc58a7329af3c
Collection: Directory of Open Access Journals (DOAJ)
DOI: 10.3390/app13053029 (Applied Sciences, vol. 13, no. 5, article 3029, published 2023-02-01)
Author affiliations:
Jungpil Shin, Abu Saleh Musa Miah, Koki Hirooka, Kota Suzuki: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 965-8580, Fukushima, Japan
Md. Al Mehedi Hasan: Department of Computer Science and Engineering, Rajshahi University of Engineering and Technology (RUET), Rajshahi 6204, Bangladesh
Hyoun-Sup Lee: Department of Applied Software Engineering, Dongeui University, Busanjin-Gu, Busan 47340, Republic of Korea
Si-Woong Jang: Department of Computer Engineering, Dongeui University, Busanjin-Gu, Busan 47340, Republic of Korea
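The abstract describes a two-branch fusion design: a transformer branch for long-range (global) dependencies, a CNN branch for local features, and a concatenation-based classification module. The NumPy sketch below illustrates only that fusion idea; the single-head self-attention, the neighbour-averaging stand-in for the CNN branch, the token shapes, and the 77-class linear head are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_branch(x):
    """Single-head scaled dot-product self-attention over patch tokens:
    every token attends to every other token (long-range dependencies)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                     # (n, n) token affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # row-wise softmax
    return w @ x                                      # (n, d) attended tokens

def local_branch(x, k=3):
    """Stand-in for the CNN branch: each token is mixed only with its
    k nearest neighbours, mimicking a local receptive field."""
    n, _ = x.shape
    out = np.empty_like(x)
    for i in range(n):
        lo, hi = max(0, i - k // 2), min(n, i + k // 2 + 1)
        out[i] = x[lo:hi].mean(axis=0)
    return out

def classify(x, w_cls):
    """Pool each branch, concatenate local + global features, linear head."""
    g = global_branch(x).mean(axis=0)                 # (d,) global features
    loc = local_branch(x).mean(axis=0)                # (d,) local features
    fused = np.concatenate([g, loc])                  # (2d,) fused vector
    return fused @ w_cls                              # class logits

tokens = rng.standard_normal((49, 64))                # e.g. a 7x7 patch grid
w_cls = rng.standard_normal((128, 77))                # 77 KSL labels (benchmark)
logits = classify(tokens, w_cls)
print(logits.shape)                                   # (77,)
```

The point of the sketch is the fusion step: the two branches see the same tokens but mix them at different ranges, and only the concatenated feature vector reaches the classification head.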