Language-Aware Vision Transformer for Referring Segmentation
Referring segmentation is a fundamental vision-language task that aims to segment an object from an image or video according to a natural language description. One of the key challenges behind this task is leveraging the referring expression to highlight relevant positions in the image...
Main Authors: Yang, Z, Wang, J, Ye, X, Tang, Y, Chen, K, Zhao, H, Torr, PHS
Format: Journal article
Language: English
Published: IEEE, 2024
Similar Items
- LAVT: Language-Aware Vision Transformer for referring image segmentation
  Authors: Yang, Z, et al.
  Published: (2022)
- Semantics-aware dynamic localization and refinement for referring image segmentation
  Authors: Yang, Z, et al.
  Published: (2023)
- Vision transformers: from semantic segmentation to dense prediction
  Authors: Zhang, L, et al.
  Published: (2024)
- Hierarchical interaction network for video object segmentation from referring expressions
  Authors: Yang, Z, et al.
  Published: (2021)
- Behind every domain there is a shift: adapting distortion-aware vision transformers for panoramic semantic segmentation
  Authors: Zhang, J, et al.
  Published: (2024)