uCAP: an unsupervised prompting method for vision-language models
This paper addresses a significant limitation that prevents Contrastive Language-Image Pretrained Models (CLIP) from achieving optimal performance on downstream image classification tasks. The key problem with CLIP-style zero-shot classification is that it requires domain-specific context in the for...
| Main authors: | |
| --- | --- |
| Format: | Conference item |
| Language: | English |
| Published: | Springer, 2024 |
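For context, the following is a minimal sketch of the standard CLIP zero-shot classification pipeline whose prompt dependence the abstract refers to: predictions hinge on hand-crafted, domain-specific prompt templates. It uses the openai/clip package; the class names, templates, and image path are illustrative assumptions, and the paper's own unsupervised prompting method (uCAP) is not reproduced here.

```python
# Sketch of CLIP zero-shot classification with hand-written prompt templates.
# Requires: pip install git+https://github.com/openai/CLIP.git
# Class names, templates, and "example.jpg" are illustrative placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["cat", "dog", "car"]                          # illustrative classes
templates = ["a photo of a {}.", "a blurry photo of a {}."]  # hand-crafted prompts

with torch.no_grad():
    # Build one text embedding per class by averaging over the prompt templates.
    class_embeddings = []
    for name in class_names:
        tokens = clip.tokenize([t.format(name) for t in templates]).to(device)
        text_features = model.encode_text(tokens)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        class_embeddings.append(text_features.mean(dim=0))
    text_weights = torch.stack(class_embeddings)
    text_weights = text_weights / text_weights.norm(dim=-1, keepdim=True)

    # Classify an image by cosine similarity to the per-class text embeddings.
    image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
    image_features = model.encode_image(image)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_features @ text_weights.T
    print("predicted:", class_names[logits.argmax(dim=-1).item()])
```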