CAT: enhancing multimodal large language model to answer questions in dynamic audio-visual scenarios
This paper focuses on the challenge of answering questions in scenarios that are composed of rich and complex dynamic audio-visual components. Although existing Multimodal Large Language Models (MLLMs) can respond to audio-visual content, these responses are sometimes ambiguous and fail to describe specific audio-visual events.

Main Authors: | Ye, Q; Yu, Z; Shao, R; Xie, X; Torr, P; Cao, X |
---|---|
Format: | Conference item |
Language: | English |
Published: | IEEE, 2024 |
author | Ye, Q Yu, Z Shao, R Xie, X Torr, P Cao, X |
collection | OXFORD |
description | This paper focuses on the challenge of answering questions in scenarios that are composed of rich and complex dynamic audio-visual components. Although existing Multimodal Large Language Models (MLLMs) can respond to audio-visual content, these responses are sometimes ambiguous and fail to describe specific audio-visual events. To overcome this limitation, we introduce CAT, which enhances MLLMs in three ways: 1) besides straightforwardly bridging audio and video, we design a clue aggregator that aggregates question-related clues in dynamic audio-visual scenarios to enrich the detailed knowledge required for large language models; 2) CAT is trained on a mixed multimodal dataset, allowing direct application in audio-visual scenarios. Notably, we collect an audio-visual joint instruction dataset named AVinstruct to further enhance the capacity of CAT to model cross-semantic correlations; 3) we propose AI-assisted ambiguity-aware direct preference optimization, a strategy specialized in retraining the model to favor non-ambiguous responses and improve its ability to localize specific audio-visual objects. Extensive experimental results demonstrate that CAT outperforms existing methods on multimodal tasks, especially in Audio-Visual Question Answering (AVQA) tasks. The code and the collected instructions are released at https://github.com/rikeilong/Bay-CAT |
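The abstract's third contribution builds on direct preference optimization (DPO). As background, the sketch below shows the standard per-pair DPO loss that such ambiguity-aware variants extend; the function name and log-probability values are illustrative, and the paper's specific AI-assisted ambiguity weighting is not reproduced here.

```python
import math

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) response pair.

    logp_*: summed log-probabilities of the chosen (w) / rejected (l)
    responses under the policy being trained and a frozen reference model.
    """
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    # -log(sigmoid(margin)), written stably as softplus(-margin)
    return math.log1p(math.exp(-margin))

# The loss shrinks as the policy prefers the chosen (here: non-ambiguous)
# answer more strongly than the reference model does.
loss_better = dpo_loss(-5.0, -9.0, -6.0, -8.0)  # policy widened the margin
loss_worse = dpo_loss(-9.0, -5.0, -8.0, -6.0)   # policy narrowed the margin
assert loss_better < loss_worse
```

In a preference-tuning pipeline, the "chosen" response would be the one judged less ambiguous (here, by an AI assistant, per the abstract) and the "rejected" one the ambiguous alternative; minimizing this loss pushes the policy toward the former.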
format | Conference item |
id | oxford-uuid:0fd705eb-7023-420e-9834-4258746fb9d2 |
institution | University of Oxford |
language | English |
publishDate | 2024 |
publisher | IEEE |
record_format | dspace |
title | CAT: enhancing multimodal large language model to answer questions in dynamic audio-visual scenarios |