Natural-Language-Driven Multimodal Representation Learning for Audio-Visual Scene-Aware Dialog System

With the development of multimedia systems in wireless environments, there is a rising need for artificial intelligence that can properly communicate with humans, with a comprehensive understanding of various types of information, in a human-like manner. This paper therefore addresses an audio-visual scene-aware dialog system that can communicate with users about audio-visual scenes.


Bibliographic Details
Main Authors: Yoonseok Heo, Sangwoo Kang, Jungyun Seo
Format: Article
Language: English
Published: MDPI AG, 2023-09-01
Series: Sensors
Subjects: multimodal deep learning; audio-visual scene-aware dialog system; event keyword driven multimodal representation learning
Online Access: https://www.mdpi.com/1424-8220/23/18/7875
Collection: DOAJ (Directory of Open Access Journals)
Full description
With the development of multimedia systems in wireless environments, there is a rising need for artificial intelligence that can properly communicate with humans, with a comprehensive understanding of various types of information, in a human-like manner. This paper therefore addresses an audio-visual scene-aware dialog system that can communicate with users about audio-visual scenes. Such a system must understand not only visual and textual information but also audio information in a comprehensive way. Despite substantial progress in multimodal representation learning with language and visual modalities, two caveats remain: the ineffective use of auditory information and the lack of interpretability of deep learning systems' reasoning. To address these issues, we propose a novel audio-visual scene-aware dialog system that utilizes a set of explicit information from each modality in the form of natural language, which can be fused into a language model in a natural way. It leverages a transformer-based decoder to generate a coherent and correct response based on multimodal knowledge in a multitask learning setting. In addition, we present a response-driven temporal moment localization method for interpreting the model, which verifies how the system generates a response. The system provides the user with the evidence referred to during response generation in the form of a timestamp within the scene. The proposed model outperforms the baseline on all quantitative and qualitative measurements; in particular, it achieves robust performance even in settings that use all three modalities, including audio. We also conducted extensive experiments to investigate the proposed model and obtained state-of-the-art performance on the system response reasoning task.
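The description above names two mechanisms: expressing explicit per-modality information (e.g., detected audio events and visual keywords) as natural language that a language model can consume, and localizing the temporal moment that served as evidence for a response. The following is a minimal, hypothetical Python sketch of both ideas, not the authors' implementation: the function names (`serialize_modalities`, `localize_evidence`), the example cue lists, and the fixed-window scoring are all illustrative assumptions.

```python
def serialize_modalities(audio_events, visual_keywords, dialog_history):
    """Flatten per-modality cues into one natural-language context string,
    which could then be fed to a transformer-based decoder as a prompt."""
    parts = []
    if audio_events:
        parts.append("Audio events: " + ", ".join(audio_events) + ".")
    if visual_keywords:
        parts.append("Visual objects: " + ", ".join(visual_keywords) + ".")
    for speaker, utterance in dialog_history:
        parts.append(f"{speaker}: {utterance}")
    return " ".join(parts)


def localize_evidence(scores, window=3):
    """Toy stand-in for temporal moment localization: given per-timestep
    relevance scores, return the (start, end) indices of the highest-scoring
    fixed-length window as the evidence 'timestamp' span."""
    best = max(range(len(scores) - window + 1),
               key=lambda i: sum(scores[i:i + window]))
    return (best, best + window)


context = serialize_modalities(
    audio_events=["dog barking", "door closing"],
    visual_keywords=["kitchen", "person", "dog"],
    dialog_history=[("User", "What is making that noise?")],
)
print(context)
# Highest 3-step window in these toy relevance scores is indices 2..5.
print(localize_evidence([0.1, 0.2, 0.9, 0.8, 0.7, 0.1]))  # → (2, 5)
```

In the paper's setting the relevance scores would come from the model's own response-generation process, so the returned span doubles as an explanation the user can inspect; here they are hard-coded for illustration.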
DOI: 10.3390/s23187875 (Sensors, vol. 23, no. 18, article 7875; ISSN 1424-8220; published 2023-09-01 by MDPI AG)
Author affiliations:
Yoonseok Heo: Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea
Sangwoo Kang: School of Computing, Gachon University, Seongnam 13120, Republic of Korea
Jungyun Seo: Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea
Keywords: multimodal deep learning; audio-visual scene-aware dialog system; event keyword driven multimodal representation learning