Emergent semantic segmentation: training-free dense-label-free extraction from vision-language models
From enormous numbers of image-text pairs, large-scale vision-language models (VLMs) learn to implicitly associate image regions with words, which is vital for tasks such as image captioning and visual question answering. However, leveraging such pre-trained models for open-vocabulary semantic segmentation...
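The association between image regions and words that the abstract describes is commonly exploited by comparing patch embeddings from a VLM's vision encoder against text embeddings of class-name prompts. The sketch below illustrates only that general idea, not this thesis's specific method; the shapes, the random "features", and the function name `segment_by_similarity` are placeholders for illustration.

```python
import numpy as np

def segment_by_similarity(patch_feats: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Assign each image patch the label of its most similar class prompt.

    patch_feats: (H, W, D) patch embeddings from a vision encoder (placeholder).
    text_embs:   (C, D) embeddings of C class-name prompts from a text encoder.
    Returns an (H, W) integer label map.
    """
    # Cosine similarity = dot product of L2-normalized vectors.
    p = patch_feats / np.linalg.norm(patch_feats, axis=-1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=-1, keepdims=True)
    sim = p @ t.T                # (H, W, C): similarity of every patch to every class
    return sim.argmax(axis=-1)   # (H, W): dense label map, no training or dense labels

# Toy run with random stand-in features (a real pipeline would use, e.g., CLIP).
rng = np.random.default_rng(0)
labels = segment_by_similarity(rng.normal(size=(14, 14, 512)),
                               rng.normal(size=(3, 512)))
print(labels.shape)  # (14, 14)
```

With a 14x14 patch grid, upsampling the label map to the input resolution would yield a coarse segmentation mask, which is the typical final step in such training-free pipelines.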
Main Author: Luo, Jiayun
Other Authors: Li, Boyang
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175765
Similar Items
- Privacy-Preserving Semantic Segmentation Using Vision Transformer
  by: Hitoshi Kiya, et al. Published: (2022-08-01)
- ClearCLIP: decomposing CLIP representations for dense vision-language inference
  by: Lan, Mengcheng, et al. Published: (2024)
- Functional and Semantic Dominants of the Digital Transmedia Language in the Context of Topical Problems of Intercultural Communication
  by: Laila Paracchini, et al. Published: (2023-10-01)
- End-to-end semi-supervised deep learning model for surface crack detection of infrastructures
  by: Mohammed Ameen Mohammed, et al. Published: (2022-12-01)
- The Impact of Semantic Clustering on Iranian EFL Advanced Learners' Vocabulary Retention
  by: Sanam Savojbolaghchilar, et al. Published: (2017-10-01)