Open-vocabulary SAM: segment and recognize twenty-thousand classes interactively
Format: Conference Paper
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/180250 http://arxiv.org/abs/2401.02955v2
Summary: The CLIP and Segment Anything Model (SAM) are remarkable vision foundation models (VFMs). SAM excels in segmentation tasks across diverse domains, whereas CLIP is renowned for its zero-shot recognition capabilities. This paper presents an in-depth exploration of integrating these two models into a unified framework. Specifically, we introduce Open-Vocabulary SAM, a SAM-inspired model designed for simultaneous interactive segmentation and recognition, leveraging two unique knowledge transfer modules: SAM2CLIP and CLIP2SAM. The former adapts SAM's knowledge into CLIP via distillation and learnable transformer adapters, while the latter transfers CLIP's knowledge into SAM, enhancing its recognition capabilities. Extensive experiments on various datasets and detectors show the effectiveness of Open-Vocabulary SAM in both segmentation and recognition tasks, significantly outperforming the naïve baselines that simply combine SAM and CLIP. Furthermore, aided by training on image classification data, our method can segment and recognize approximately 22,000 classes.
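To make the SAM2CLIP idea in the summary more concrete, the sketch below shows one plausible way to distill a frozen SAM image encoder's features toward CLIP's visual embedding space with a small learnable transformer adapter. All module names, feature dimensions, and the cosine-distance distillation loss are illustrative assumptions for this sketch; they are not the paper's actual implementation.

```python
# Illustrative sketch only: names, dimensions, and the cosine distillation
# loss are assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAM2CLIPAdapter(nn.Module):
    """Learnable transformer adapter mapping SAM patch tokens toward CLIP space."""

    def __init__(self, sam_dim=256, clip_dim=1024, num_layers=2, num_heads=8):
        super().__init__()
        self.proj_in = nn.Linear(sam_dim, clip_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=clip_dim, nhead=num_heads, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, sam_tokens):
        # sam_tokens: (batch, tokens, sam_dim) features from a frozen SAM encoder
        x = self.proj_in(sam_tokens)
        return self.blocks(x)


def distillation_loss(adapted_tokens, clip_tokens):
    """Align adapted SAM tokens with frozen CLIP visual tokens (cosine distance)."""
    adapted = F.normalize(adapted_tokens, dim=-1)
    target = F.normalize(clip_tokens, dim=-1)
    return (1.0 - (adapted * target).sum(dim=-1)).mean()


if __name__ == "__main__":
    # Dummy tensors standing in for real SAM / CLIP encoder outputs.
    sam_feats = torch.randn(2, 196, 256)    # (batch, tokens, assumed SAM dim)
    clip_feats = torch.randn(2, 196, 1024)  # (batch, tokens, assumed CLIP dim)
    adapter = SAM2CLIPAdapter()
    loss = distillation_loss(adapter(sam_feats), clip_feats)
    print(f"distillation loss: {loss.item():.4f}")
```

Only the adapter would be trained under this setup, with both SAM and CLIP encoders kept frozen; the CLIP2SAM direction described in the summary would analogously inject CLIP's semantic knowledge into SAM's mask decoder to enable recognition.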