Data structures for deductive simulation of HDL conditional operators

Bibliographic Details
Main Authors: Yevhenii Shovkovyi, Olena Grinyova, Serhii Udovenko, Larysa Chala
Format: Article
Language: English
Published: Kharkiv National University of Radio Electronics, 2023-12-01
Series: Сучасний стан наукових досліджень та технологій в промисловості
Subjects: automation of sign speech; animated character; body position tracking; people with hearing impairments; sign language; neural networks; gesture recognition; ukrainian sign language; sign language translation; reduce inequality
Online Access: https://itssi-journal.com/index.php/ittsi/article/view/447
_version_ 1797316122627801088
author Yevhenii Shovkovyi
Olena Grinyova
Serhii Udovenko
Larysa Chala
author_facet Yevhenii Shovkovyi
Olena Grinyova
Serhii Udovenko
Larysa Chala
author_sort Yevhenii Shovkovyi
collection DOAJ
description Implementing automatic sign language translation software to support the social inclusion of people with hearing impairment is an important task. Social inclusion of people with hearing disabilities is an acute problem that must be addressed through the development of information technologies and legislative initiatives that ensure the rights of people with disabilities and their equal opportunities. This substantiates the relevance of research on assistive technologies, in particular software tools, for the social inclusion of people with severe hearing impairment. The subject of the research is methods of automated sign language translation using intelligent technologies. The purpose of the work is the development and study of methods for automating sign language translation to improve the quality of life of people with hearing impairments, in accordance with the "Sustainable Development Goals of Ukraine" (the "Reduction of Inequality" goal). The main tasks of the research are the development and testing of methods for converting sign language into text, converting text into sign language, and automating translation from one sign language to another using modern intelligent technologies. Neural network modeling and 3D animation methods were used to solve these problems. The following results were obtained: the main problems and tasks of social inclusion for people with hearing impairments were identified; a comparative analysis of modern methods and software platforms for automatic sign language translation was carried out; and a system was proposed and investigated that combines an SL-to-Text method, a Text-to-SL method that uses 3D animation to generate sign language concepts, a method for generating a 3D-animated gesture from video recordings, and a method for implementing Sign Language1 to Sign Language2 (SL1-to-SL2) translation. For gesture recognition, a convolutional neural network model is used, trained on imported and system-generated datasets of video gestures; the trained model achieves high recognition accuracy (98.52%). The 3D model for displaying gestures on the screen was created and processed in the Unity 3D environment. The project structure, including the executable and auxiliary files used to build 3D animation for generating sign language concepts, comprises event handler files, files that carry information about the positions of the tracked body points used to display results, and files that store the characteristics of the materials assigned to particular body tracking points. Conclusions: the proposed methods of automated translation have practical significance, which is confirmed by the demo versions of the software applications "Sign Language to Text" and "Text to Sign Language". Promising directions for further research include improving SL1-to-SL2 methods, creating open datasets of video gestures, and bringing together scientists and developers to fill dictionaries with concepts of various sign languages.
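
The abstract reports a convolutional neural network trained on video gesture datasets for the SL-to-Text step but gives no architectural details. The sketch below is only a minimal illustration of such a clip-level gesture classifier; the use of PyTorch, the frame count, layer sizes, and number of gesture classes are assumptions made for illustration, not details taken from the article.

# Minimal sketch of a clip-level convolutional gesture classifier
# (illustrative only; NOT the authors' model -- layer sizes, frame count
# and class count are assumptions).
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Classify a short video clip (stacked RGB frames) into a gesture class."""

    def __init__(self, num_frames: int = 16, num_classes: int = 50):
        super().__init__()
        # Treat the stacked frames as input channels: (batch, 3*num_frames, H, W).
        self.features = nn.Sequential(
            nn.Conv2d(3 * num_frames, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip)
        return self.classifier(torch.flatten(x, 1))

if __name__ == "__main__":
    model = GestureCNN()
    dummy_clip = torch.randn(2, 3 * 16, 112, 112)  # two fake 16-frame clips
    logits = model(dummy_clip)                     # shape: (2, 50)
    print(logits.argmax(dim=1))                    # predicted gesture indices

A classifier of this kind would only cover the recognition stage; for the SL1-to-SL2 step the abstract implies chaining the two directions, i.e. recognizing a gesture in the source sign language, passing through a text representation, and then rendering the target sign language concept as a 3D animation.
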
first_indexed 2024-03-08T03:13:40Z
format Article
id doaj.art-5322684d2319474aa574f1940778bd1f
institution Directory Open Access Journal
issn 2522-9818
2524-2296
language English
last_indexed 2024-03-08T03:13:40Z
publishDate 2023-12-01
publisher Kharkiv National University of Radio Electronics
record_format Article
series Сучасний стан наукових досліджень та технологій в промисловості
spelling doaj.art-5322684d2319474aa574f1940778bd1f 2024-02-12T18:02:26Z
eng
Kharkiv National University of Radio Electronics
Сучасний стан наукових досліджень та технологій в промисловості
2522-9818; 2524-2296
2023-12-01, 4 (26)
Data structures for deductive simulation of HDL conditional operators
Yevhenii Shovkovyi (Kharkiv National University of Radio Electronics)
Olena Grinyova (Kharkiv National University of Radio Electronics)
Serhii Udovenko (Simon Kuznets Kharkiv National University of Economics)
Larysa Chala (Kharkiv National University of Radio Electronics)
https://itssi-journal.com/index.php/ittsi/article/view/447
automation of sign speech; animated character; body position tracking; people with hearing impairments; sign language; neural networks; gesture recognition; ukrainian sign language; sign language translation; reduce inequality
spellingShingle Yevhenii Shovkovyi
Olena Grinyova
Serhii Udovenko
Larysa Chala
Data structures for deductive simulation of HDL conditional operators
Сучасний стан наукових досліджень та технологій в промисловості
automation of sign speech; animated character; body position tracking; people with hearing impairments; sign language; neural networks; gesture recognition; ukrainian sign language; sign language translation; reduce inequality
title Data structures for deductive simulation of HDL conditional operators
title_full Data structures for deductive simulation of HDL conditional operators
title_fullStr Data structures for deductive simulation of HDL conditional operators
title_full_unstemmed Data structures for deductive simulation of HDL conditional operators
title_short Data structures for deductive simulation of HDL conditional operators
title_sort data structures for deductive simulation of hdl conditional operators
topic automation of sign speech; animated character; body position tracking; people with hearing impairments; sign language; neural networks; gesture recognition; ukrainian sign language; sign language translation; reduce inequality
url https://itssi-journal.com/index.php/ittsi/article/view/447
work_keys_str_mv AT yevheniishovkovyi datastructuresfordeductivesimulationofhdlconditionaloperators
AT olenagrinyova datastructuresfordeductivesimulationofhdlconditionaloperators
AT serhiiudovenko datastructuresfordeductivesimulationofhdlconditionaloperators
AT larysachala datastructuresfordeductivesimulationofhdlconditionaloperators