Separating the “chirp” from the “chat”: self-supervised visual grounding of sound and language
We present DenseAV, a novel dual encoder grounding architecture that learns high-resolution, semantically meaningful, and audio-visually aligned features solely through watching videos. We show that DenseAV can discover the “meaning” of words and the “location” of sounds without explicit localization...
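As a rough illustration of the dual-encoder contrastive setup the abstract describes, the sketch below pairs an audio encoder and a visual encoder and aligns matching clips with an InfoNCE-style loss. This is a minimal, assumed example, not DenseAV's actual code; all module names, shapes, and the simple clip-level pooling are hypothetical stand-ins.

```python
# Minimal sketch (assumed, not DenseAV's implementation) of a dual-encoder
# audio-visual contrastive model: each modality gets dense features, and a
# symmetric InfoNCE loss pulls matched audio/frame pairs together in a batch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Hypothetical lightweight encoders standing in for the pretrained
        # audio and visual backbones a real system would use.
        self.audio_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1),
        )
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1),
        )

    def forward(self, spectrogram, frame):
        # Dense, L2-normalized feature maps for each modality.
        a = F.normalize(self.audio_encoder(spectrogram), dim=1)  # (B, D, Ha, Wa)
        v = F.normalize(self.visual_encoder(frame), dim=1)       # (B, D, Hv, Wv)
        return a, v


def clip_contrastive_loss(a, v, temperature=0.07):
    # Pool dense maps to clip-level embeddings, then apply a symmetric
    # InfoNCE loss: the matched audio/frame pair in each row is the positive.
    a_vec = F.normalize(a.mean(dim=(2, 3)), dim=1)  # (B, D)
    v_vec = F.normalize(v.mean(dim=(2, 3)), dim=1)  # (B, D)
    logits = a_vec @ v_vec.t() / temperature        # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    model = DualEncoder()
    spec = torch.randn(4, 1, 128, 64)   # batch of log-mel spectrograms (assumed shape)
    img = torch.randn(4, 3, 224, 224)   # batch of video frames
    audio_feats, visual_feats = model(spec, img)
    print(clip_contrastive_loss(audio_feats, visual_feats))
```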
Main Authors: Hamilton, M, Zisserman, A, Hershey, JR, Freeman, WT
Material Type: Conference item
Language: English
Published: IEEE, 2024
Similar Items
- Multi-task self-supervised visual learning
  Author: Doersch, C, et al.
  Published: (2017)
- Ambient Sound Provides Supervision for Visual Learning
  Author: Owens, Andrew Hale, et al.
  Published: (2017)
- Learning Sight from Sound: Ambient Sound Provides Supervision for Visual Learning
  Author: Owens, Andrew, et al.
  Published: (2021)
- Self-Supervised Learning for Audio-Visual Relationships of Videos With Stereo Sounds
  Author: Tomoya Sato, et al.
  Published: (2022-01-01)
- Self-supervised learning of audio-visual objects from video
  Author: Afouras, T, et al.
  Published: (2020)