CLIP-Driven Prototype Network for Few-Shot Semantic Segmentation
Recent research has shown that visual–text pre-trained models perform well on traditional vision tasks. CLIP, the most influential of these works, has attracted significant attention from researchers. Thanks to its strong visual representation capabilities, many recent studies have applied CLIP to pixel-level...
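The abstract above only sketches the idea, so the snippet below gives a minimal, hypothetical illustration of the general prototype-matching scheme the title refers to: dense visual features (e.g., from a CLIP-like image encoder) are masked-average-pooled over the support mask to form a class prototype, and the query feature map is scored by cosine similarity against that prototype. All function names, tensor shapes, and the use of random tensors are assumptions made for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    """Build a class prototype by averaging support features inside the foreground mask.

    features: (B, C, H, W) dense visual features (e.g., from a CLIP-like image encoder)
    mask:     (B, 1, h, w) binary foreground mask of the support image
    returns:  (B, C) prototype vector
    """
    # Resize the mask to the feature resolution, then take a masked mean.
    mask = F.interpolate(mask.float(), size=features.shape[-2:],
                         mode="bilinear", align_corners=False)
    proto = (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
    return proto

def prototype_prediction(query_features, prototype):
    """Dense cosine similarity between query features and the prototype.

    query_features: (B, C, H, W)
    prototype:      (B, C)
    returns:        (B, 1, H, W) similarity map used as a coarse segmentation score
    """
    q = F.normalize(query_features, dim=1)
    p = F.normalize(prototype, dim=1)[..., None, None]  # (B, C, 1, 1)
    return (q * p).sum(dim=1, keepdim=True)

if __name__ == "__main__":
    # Toy shapes only; real dense features would come from a CLIP-like backbone.
    support_feat = torch.randn(2, 512, 32, 32)
    support_mask = (torch.rand(2, 1, 256, 256) > 0.5)
    query_feat = torch.randn(2, 512, 32, 32)

    proto = masked_average_pooling(support_feat, support_mask)
    score = prototype_prediction(query_feat, proto)
    print(score.shape)  # torch.Size([2, 1, 32, 32])
```

A complete CLIP-driven method would typically also exploit CLIP's text embeddings and a trainable decoder on top of this coarse similarity map; the sketch covers only the prototype-matching core.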
| Main Authors: | Shi-Cheng Guo, Shang-Kun Liu, Jing-Yu Wang, Wei-Min Zheng, Cheng-Yu Jiang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-09-01 |
| Series: | Entropy |
| Subjects: | |
| Online Access: | https://www.mdpi.com/1099-4300/25/9/1353 |
Similar Items
- Multi-Similarity Enhancement Network for Few-Shot Segmentation
  by: Hao Chen, et al.
  Published: (2023-01-01)
- Meta-Seg: A Generalized Meta-Learning Framework for Multi-Class Few-Shot Semantic Segmentation
  by: Zhiying Cao, et al.
  Published: (2019-01-01)
- Self-Enhanced Mixed Attention Network for Three-Modal Images Few-Shot Semantic Segmentation
  by: Kechen Song, et al.
  Published: (2023-07-01)
- HSDNet: a poultry farming model based on few-shot semantic segmentation addressing non-smooth and unbalanced convergence
  by: Daixian Liu, et al.
  Published: (2024-06-01)
- Dual Prototype Learning for Few Shot Semantic Segmentation
  by: Wenxuan Li, et al.
  Published: (2024-01-01)