Language-aware vision transformer for referring segmentation
Referring segmentation is a fundamental vision-language task that aims to segment an object in an image or video according to a natural language description. One of the key challenges of this task is leveraging the referring expression to highlight relevant positions in the image...
Main Authors: | Yang, Z; Wang, J; Ye, X; Tang, Y; Chen, K; Zhao, H; Torr, PHS
---|---
Format: | Journal article
Language: | English
Published: | IEEE, 2024
Similar Items
- LAVT: Language-Aware Vision Transformer for referring image segmentation
  by: Yang, Z, et al.
  Published: (2022)
- Semantics-aware dynamic localization and refinement for referring image segmentation
  by: Yang, Z, et al.
  Published: (2023)
- Vision transformers: from semantic segmentation to dense prediction
  by: Zhang, L, et al.
  Published: (2024)
- Hierarchical interaction network for video object segmentation from referring expressions
  by: Yang, Z, et al.
  Published: (2021)
- Behind every domain there is a shift: adapting distortion-aware vision transformers for panoramic semantic segmentation
  by: Zhang, J, et al.
  Published: (2024)