VL-Few: Vision Language Alignment for Multimodal Few-Shot Meta Learning

Complex real-world tasks, such as visual question answering (VQA), involve models spanning multiple modalities. However, traditional multimodal learning requires large amounts of aligned data, such as image-text pairs, and constructing such training data at scale is a challenge for multimodal learning...


Bibliographic Details
Main Authors: Han Ma, Baoyu Fan, Benjamin K. Ng, Chan-Tong Lam
Format: Article
Language: English
Published: MDPI AG 2024-01-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/14/3/1169