A Signer Independent Sign Language Recognition with Co-articulation Elimination from Live Videos: An Indian Scenario


Bibliographic Details
Main Authors: P.K. Athira, C.J. Sruthi, A. Lijiya
Format: Article
Language: English
Published: Elsevier 2022-03-01
Series: Journal of King Saud University: Computer and Information Sciences
Online Access: http://www.sciencedirect.com/science/article/pii/S131915781831228X
Description
Summary: Due to the large population of hearing-impaired and speech-impaired people in India, a sign language interpretation system is becoming highly important for minimizing their isolation in society. This paper proposes a novel signer-independent, vision-based gesture recognition system capable of recognizing single-handed static and dynamic gestures, double-handed static gestures, and fingerspelling words of Indian Sign Language (ISL) from live video. The use of Zernike moments for key frame extraction reduces the computation time to a large extent. The paper also proposes an improved method for co-articulation elimination in fingerspelling alphabets. The gesture recognition module comprises three main steps: preprocessing, feature extraction, and classification. In the preprocessing phase, the signs are extracted from a real-time video using skin color segmentation. An appropriate feature vector is extracted from the gesture sequence after the co-articulation elimination phase. The obtained features are then used for classification with a Support Vector Machine (SVM). The system recognized fingerspelling alphabets with 91% accuracy and single-handed dynamic words with 89% accuracy. The experimental results show that the system has a better recognition rate than some existing methods.
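The preprocessing step the abstract describes (skin color segmentation of the signer's hands) is often implemented by thresholding the chrominance channels of the YCbCr color space. The sketch below illustrates that general idea in plain Python; the Cb/Cr threshold ranges are typical values from the skin-detection literature, not the authors' actual parameters, and the function names are hypothetical.

```python
# Minimal sketch of skin-color segmentation in YCbCr space, a common
# way to implement the preprocessing step described in the abstract.
# The Cb/Cr ranges below are typical literature values, NOT the
# parameters used in the paper.

def rgb_to_ycbcr(r, g, b):
    """Convert an 8-bit RGB pixel to full-range YCbCr (ITU-R BT.601)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its Cb and Cr fall inside the ranges."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

def skin_mask(image):
    """Binary hand mask for an image given as rows of (r, g, b) tuples."""
    return [[is_skin(*px) for px in row] for row in image]

if __name__ == "__main__":
    frame = [[(200, 150, 120), (0, 255, 0)]]  # a skin-toned pixel, a green pixel
    print(skin_mask(frame))  # [[True, False]]
```

In a real pipeline the resulting binary mask would be cleaned with morphological operations before the feature-extraction stage; here the point is only the per-pixel chrominance test.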
ISSN:1319-1578