Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition based on computer vision (CV) have been conducted worldwide to reduce this barrier. However, the CV approach is restricted by the camera's viewing angle and is highly affected by environmental factors. In addition, CV usually involves machine learning that requires a team of experts and high-cost hardware, which increases the cost of deployment in real-world situations. This study therefore designs and implements a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion to the data from six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures, so the proposed method is not restricted by the field of view. The study shows that the model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution that assists hearing-impaired people in communicating with others and improves their quality of life.

Bibliographic Details
Main Authors: Boon Giin Lee (School of Computer Science, The University of Nottingham Ningbo China, Ningbo 315100, China); Teak-Wei Chong (Department of Electronic Engineering, Keimyung University, Daegu 42601, Korea); Wan-Young Chung (Department of Electronic Engineering, Pukyong National University, Busan 48513, Korea)
Format: Article
Language: English
Published: MDPI AG, 2020-11-01
Series: Sensors, Vol. 20, No. 21, Article 6256
ISSN: 1424-8220
DOI: 10.3390/s20216256
Subjects: deep learning; human-computer interaction; motion sensor; sensor fusion; sign language recognition; wearable computing
Online Access: https://www.mdpi.com/1424-8220/20/21/6256
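
For illustration only, the sketch below shows one way a deep-learning classifier could be built over fused streams from six IMUs of the kind described in the abstract. The abstract does not specify the network architecture, sampling window, sensor channels, or number of gesture classes, so every constant and layer choice here is an assumption (a Keras LSTM over concatenated IMU channels), not the authors' actual model.

```python
# Illustrative sketch only: a recurrent classifier over fused IMU windows.
# All sizes below are assumptions; the paper's abstract does not specify them.
import numpy as np
import tensorflow as tf

NUM_IMUS = 6          # five fingertips + back of the hand, per the abstract
CHANNELS_PER_IMU = 9  # assumed: 3-axis accelerometer, gyroscope, magnetometer
WINDOW_LEN = 100      # assumed number of samples per gesture window
NUM_CLASSES = 28      # hypothetical number of dynamic ASL gestures

def build_model() -> tf.keras.Model:
    """Fuse the IMU channels by concatenating them per time step,
    then classify the whole gesture window with an LSTM."""
    inputs = tf.keras.Input(shape=(WINDOW_LEN, NUM_IMUS * CHANNELS_PER_IMU))
    x = tf.keras.layers.Masking()(inputs)          # tolerate zero-padded windows
    x = tf.keras.layers.LSTM(128)(x)               # temporal modelling of the gesture
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Dummy data standing in for windows of fused IMU readings and gesture labels.
    x_train = np.random.randn(32, WINDOW_LEN, NUM_IMUS * CHANNELS_PER_IMU).astype("float32")
    y_train = np.random.randint(0, NUM_CLASSES, size=(32,))
    model = build_model()
    model.fit(x_train, y_train, epochs=1, batch_size=8)
```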