uCAP: an unsupervised prompting method for vision-language models

This paper addresses a significant limitation that prevents Contrastive Language-Image Pre-training (CLIP) models from achieving optimal performance on downstream image classification tasks. The key problem with CLIP-style zero-shot classification is that it requires domain-specific context in the for...
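
As background to the problem the abstract describes, the following is a minimal sketch of standard CLIP zero-shot classification, where hand-written prompt templates supply the domain-specific context; it uses the Hugging Face transformers API, and the checkpoint name, image path, and class labels are illustrative assumptions rather than details from the paper.

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Illustrative checkpoint; any CLIP-style checkpoint works the same way.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Hand-crafted prompt templates: the domain-specific context the abstract refers to.
    class_names = ["dog", "cat", "car"]  # placeholder class labels
    prompts = [f"a photo of a {name}" for name in class_names]

    image = Image.open("example.jpg")  # placeholder image path
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)

    # Softmax over image-text similarity scores gives per-class probabilities.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(class_names, probs[0].tolist())))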

Bibliographic record details
Main authors: Nguyen, AT, Tai, KS, Chen, BC, Shukla, SN, Yu, H, Torr, P, Tian, TP, Lim, SN
Format: Conference item
Language: English
Published: Springer 2024