Rethinking visual prompting for multimodal large language models with external knowledge

In recent years, multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets, enabling them to generally understand images well. However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs, limiting their ability to answer questions requiring an understanding of detailed or localized visual elements. Drawing inspiration from the Retrieval-Augmented Generation (RAG) concept, this paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models (e.g., instance segmentation/OCR models), into MLLMs. This is a promising yet underexplored direction for enhancing MLLMs' performance. Our approach diverges from concurrent works, which transform external knowledge into additional text prompts, necessitating the model to indirectly learn the correspondence between visual content and text coordinates. Instead, we propose embedding fine-grained knowledge information directly into a spatial embedding map as a visual prompt. This design can be effortlessly incorporated into various MLLMs, such as LLaVA and Mipha, considerably improving their visual understanding performance. Through rigorous experiments, we demonstrate that our method can enhance MLLM performance across nine benchmarks, amplifying their fine-grained context-aware capabilities.
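The core idea described in the abstract is to inject dense external knowledge (for example, instance segmentation masks or OCR regions) as a spatial embedding map aligned with the vision encoder's patch grid, rather than as textual coordinates. The sketch below illustrates one plausible way such an injection could look in PyTorch; it is not the authors' implementation, and the class name, dimensions, and the simple additive fusion are all illustrative assumptions.

# Minimal sketch (not the paper's code): converting an external model's dense class map
# into a spatial embedding map that is added to an MLLM's patch-level vision features.
# num_classes, embed_dim, and the 24x24 patch grid are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialKnowledgePrompt(nn.Module):
    def __init__(self, num_classes: int = 80, embed_dim: int = 1024, grid: int = 24):
        super().__init__()
        # One learnable embedding per external-knowledge category; index 0 is reserved
        # for "no external knowledge" at that location.
        self.class_embed = nn.Embedding(num_classes + 1, embed_dim, padding_idx=0)
        self.grid = grid  # e.g., a 24x24 ViT patch grid for a 336px image with 14px patches

    def forward(self, vision_feats: torch.Tensor, seg_map: torch.Tensor) -> torch.Tensor:
        """
        vision_feats: (B, grid*grid, embed_dim) patch features from the MLLM's vision encoder.
        seg_map:      (B, H, W) integer class ids from an external segmentation/OCR model.
        Returns the patch features with a spatial knowledge embedding added per patch.
        """
        B = seg_map.shape[0]
        # Downsample the dense class map to the patch grid (nearest keeps hard labels).
        seg_small = F.interpolate(
            seg_map.unsqueeze(1).float(), size=(self.grid, self.grid), mode="nearest"
        ).long().squeeze(1)                                 # (B, grid, grid)
        prompt = self.class_embed(seg_small)                # (B, grid, grid, embed_dim)
        prompt = prompt.view(B, self.grid * self.grid, -1)  # align with patch tokens
        return vision_feats + prompt                        # inject as a visual prompt

Because the knowledge embedding is spatially aligned with the patch tokens, the model does not have to learn an indirect mapping from textual coordinates back to image regions, which is the motivation the abstract gives for preferring a visual prompt over additional text prompts.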

Bibliographic details

Main authors: Lin, Y; Li, Y; Chen, D; Xu, W; Clark, R; Torr, P; Yuan, L
Format: Internet publication
Language: English
Published: 2024
Institution: University of Oxford
Record ID: oxford-uuid:b542f3db-af9e-4f17-b97a-5021582c5368