Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild

Bibliographic Details
Main Authors: Yibo He, Kah Phooi Seng, Li Minn Ang
Format: Article
Language: English
Published: MDPI AG 2023-02-01
Series: Sensors
Subjects: multimodal sensing; audio-visual speech recognition; deep learning
Online Access: https://www.mdpi.com/1424-8220/23/4/1834
_version_ 1827755663813509120
author Yibo He
Kah Phooi Seng
Li Minn Ang
author_facet Yibo He
Kah Phooi Seng
Li Minn Ang
author_sort Yibo He
collection DOAJ
description This paper investigates multimodal sensor architectures with deep learning for audio-visual speech recognition (AVSR), focusing on in-the-wild scenarios. The term “in the wild” describes AVSR on unconstrained, natural-language audio and video streams. AVSR is a speech-recognition task that leverages both an audio input of a human voice and an aligned visual input of lip motions. However, since in-the-wild scenarios can include more noise, AVSR performance is degraded. Here, we propose improvements to AVSR models by incorporating data-augmentation techniques that generate additional data samples for building the classification models. For data augmentation, we utilized a combination of conventional approaches (e.g., flips and rotations) and newer approaches, such as generative adversarial networks (GANs). To validate the approaches, we used augmented data from the well-known Lip Reading Sentences 2 (LRS2) and LRS3 datasets during training, while testing was performed on the original data. The experimental results indicated that the proposed AVSR model and framework, combined with the augmentation approach, enhanced in-the-wild performance on noisy datasets. Furthermore, we discuss automatic speech recognition (ASR) and AVSR architectures and give a concise summary of the AVSR models that have been proposed.
first_indexed 2024-03-11T08:12:11Z
format Article
id doaj.art-c61a2a5a15f24d50be1b6f58a7201463
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-11T08:12:11Z
publishDate 2023-02-01
publisher MDPI AG
record_format Article
series Sensors
spelling Sensors, vol. 23, no. 4, article 1834, published 2023-02-01 by MDPI AG; DOI: 10.3390/s23041834
Yibo He (School of AI and Advanced Computing, Xian Jiaotong Liverpool University, Suzhou 215123, China)
Kah Phooi Seng (School of AI and Advanced Computing, Xian Jiaotong Liverpool University, Suzhou 215123, China)
Li Minn Ang (School of Science, Technology and Engineering, University of Sunshine Coast, Sippy Downs, QLD 4502, Australia)
title Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild
title_sort multimodal sensor input architecture with deep learning for audio visual speech recognition in wild
topic multimodal sensing
audio-visual speech recognition
deep learning
url https://www.mdpi.com/1424-8220/23/4/1834
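
Illustrative note: the abstract mentions conventional data augmentation (flips and rotations) of the LRS2/LRS3 training material, with GANs as a further generative option. The sketch below is not taken from the paper; it shows one plausible way such conventional frame-level augmentation could be applied to a lip-region video clip. The torchvision-based pipeline, parameter values, and function names are assumptions for illustration only.

# Minimal sketch (assumed, not from the paper): conventional augmentation
# of a lip-region video clip with random flips and small rotations.
import torch
from torchvision import transforms

# One policy applied identically to every frame so lip motion stays consistent.
frame_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # mirror the mouth region
    transforms.RandomRotation(degrees=10),    # small in-plane rotation
])

def augment_clip(clip: torch.Tensor) -> torch.Tensor:
    """Augment a (T, C, H, W) clip, reusing the same random draw for all frames."""
    state = torch.get_rng_state()
    frames = []
    for frame in clip:              # frame: (C, H, W)
        torch.set_rng_state(state)  # same random parameters for every frame
        frames.append(frame_augment(frame))
    return torch.stack(frames)

# Example: augmented copies would be added to the training pool, while
# evaluation stays on the original, unaugmented clips (as in the abstract).
clip = torch.rand(25, 3, 96, 96)    # 25 frames of a 96x96 RGB mouth crop
augmented = augment_clip(clip)

GAN-based augmentation, also mentioned in the abstract, would complement these transforms with synthetically generated samples and is not sketched here.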