Separating the “chirp” from the “chat”: self-supervised visual grounding of sound and language
We present DenseAV, a novel dual encoder grounding architecture that learns high-resolution, semantically meaningful, and audio-visually aligned features solely through watching videos. We show that DenseAV can discover the “meaning” of words and the “location” of sounds without explicit localization...
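The abstract describes a dual-encoder model that aligns audio and visual features densely enough to localize words and sounds. As a rough illustration only (not DenseAV's actual implementation; all shapes and names below are hypothetical), a dense audio-visual similarity volume of this kind can be sketched as:

```python
import numpy as np

# Hypothetical sketch of dense audio-visual similarity in a dual-encoder
# grounding model: each audio time step is compared against every visual
# location, yielding a similarity volume that can localize sounds and words.
rng = np.random.default_rng(0)
T, H, W, D = 8, 14, 14, 64          # audio steps, feature-map size, channels

audio = rng.standard_normal((T, D))       # per-time-step audio embeddings
visual = rng.standard_normal((H, W, D))   # per-location visual embeddings

# L2-normalize so dot products become cosine similarities
audio /= np.linalg.norm(audio, axis=-1, keepdims=True)
visual /= np.linalg.norm(visual, axis=-1, keepdims=True)

# Dense similarity volume of shape (T, H, W)
sim = np.einsum('td,hwd->thw', audio, visual)

# A clip-level score for contrastive training can pool over space and time,
# e.g. max over locations, then mean over time steps
clip_score = sim.max(axis=(1, 2)).mean()
print(sim.shape, float(clip_score))
```

Training such a model contrastively on matched versus mismatched audio-video pairs is what lets localization emerge without explicit supervision; the pooling choice above is one illustrative option, not the paper's.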
Main authors: 
Format: Conference item
Language: English
Published: IEEE, 2024