Separating the “chirp” from the “chat”: self-supervised visual grounding of sound and language

We present DenseAV, a novel dual encoder grounding architecture that learns high-resolution, semantically meaningful, and audio-visually aligned features solely through watching videos. We show that DenseAV can discover the “meaning” of words and the “location” of sounds without explicit localization...
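The abstract describes a dual-encoder setup in which audio and visual features are compared densely, so that localization emerges from the similarity pattern rather than from supervision. The sketch below is a minimal illustration of that idea, not the paper's implementation: all shapes, the cosine similarity, and the max/mean pooling are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (not from the paper): D-dim features for
# T audio frames and H*W visual patches of one video clip.
T, HW, D = 8, 16, 32
audio = rng.normal(size=(T, D))    # output of an audio encoder
visual = rng.normal(size=(HW, D))  # output of a visual encoder

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Dense similarity volume: one cosine score per
# (audio frame, image patch) pair.
sim = l2norm(audio) @ l2norm(visual).T  # shape (T, HW)

# Clip-level score by pooling the volume (max over patches,
# mean over time); which entries of `sim` are large indicates
# where in the image each moment of audio "grounds".
clip_score = sim.max(axis=1).mean()
print(sim.shape, float(clip_score))
```

In a contrastive training setup, such clip-level scores for matching and mismatching audio-video pairs would feed a loss that pulls aligned pairs together, which is what lets dense alignment emerge without localization labels.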

Detailed Description

Bibliographic Details
Main Authors: Hamilton, M, Zisserman, A, Hershey, JR, Freeman, WT
Format: Conference item
Language: English
Published: IEEE 2024