Smart Communication System Using Sign Language Interpretation

Bibliographic Details
Main Authors: Divyansh Bisht, Manthan Kojage, Manu Shukla, Yash Patil, Priyanka Bagade
Format: Article
Language: English
Published: FRUCT, 2022-04-01
Series: Proceedings of the XXth Conference of Open Innovations Association FRUCT
Subjects: sign language recognition, computer vision, mediapipe, random forest classifier
Online Access: https://www.fruct.org/publications/fruct31/files/Bis.pdf
author Divyansh Bisht
Manthan Kojage
Manu Shukla
Yash Patil
Priyanka Bagade
collection DOAJ
description Although sign language has become more widely used in recent years, establishing effective communication between deaf or mute people and non-signers without a translator remains a barrier. Multiple methods have been proposed in the literature to overcome this challenge through Sign Language Recognition (SLR), using approaches based on arm sensors, data gloves, and computer vision. However, the sensor-based methods require users to wear additional devices such as armbands and data gloves, while the sensor-free vision-based methods are computationally intensive and sometimes less accurate than the wearable sensor-based methods. In this paper, we propose a lightweight, vision-based, web-based sign-language interpretation system. It provides two-way communication for all classes of people (deaf-and-mute, hard of hearing, visually impaired, and non-signers) and can be scaled commercially. The proposed method uses MediaPipe to extract hand features from the input image or video and then uses a lightweight random forest classifier to classify the signs based on the extracted features, achieving an accuracy of 94.69%. The model is trained on the alphabet of American Sign Language. We developed a web-based user interface for ease of deployment, equipped with text-to-speech, speech-to-text, and auto-correct features to support communication between deaf-and-mute, hard of hearing, visually impaired, and non-signing users.
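
As a rough illustration of the pipeline the abstract describes (hand-landmark extraction followed by a lightweight classifier), the sketch below uses MediaPipe's Python hand-tracking solution and scikit-learn's RandomForestClassifier. The 63-value feature layout (21 landmarks times x, y, z coordinates), the placeholder training data, and the input file name sign.jpg are assumptions made for illustration, not details taken from the paper.

# Minimal sketch: MediaPipe hand landmarks feeding a random forest classifier.
# Assumptions (not from the paper): 63 features per sample (21 landmarks x 3
# coordinates) and random placeholder data standing in for a labelled
# ASL-alphabet landmark dataset.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_hand_features(image_bgr):
    """Return a flat 63-value landmark vector, or None if no hand is found."""
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    landmarks = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in landmarks]).ravel()

# Placeholder training data; in practice each row would be the landmark
# vector extracted from a labelled ASL alphabet image.
rng = np.random.default_rng(0)
X_train = rng.random((260, 63))
y_train = rng.choice(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"), size=260)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

features = extract_hand_features(cv2.imread("sign.jpg"))  # hypothetical input image
if features is not None:
    print("Predicted letter:", clf.predict(features.reshape(1, -1))[0])

A random forest over a few dozen landmark coordinates is cheap to train and evaluate, which is consistent with the paper's claim that the system is lightweight enough to run as a web service, in contrast to vision approaches that apply a deep network to raw pixels.
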
first_indexed 2024-12-12T10:53:42Z
format Article
id doaj.art-06f78d3cac3548619adeb7c009126379
institution Directory Open Access Journal
issn 2305-7254
2343-0737
language English
last_indexed 2024-12-12T10:53:42Z
publishDate 2022-04-01
publisher FRUCT
record_format Article
series Proceedings of the XXth Conference of Open Innovations Association FRUCT
doi 10.23919/FRUCT54823.2022.9770914
affiliation Indian Institute of Technology Kanpur, India (all authors)
title Smart Communication System Using Sign Language Interpretation
topic sign language recognition
computer vision
mediapipe
random forest classifier
url https://www.fruct.org/publications/fruct31/files/Bis.pdf