Audio-visual synchronisation in the wild

Bibliographic Details
Main Authors: Chen, H, Xie, W, Afouras, T, Nagrani, A, Vedaldi, A, Zisserman, A
Format: Conference item
Language: English
Published: British Machine Vision Association 2021
Description
Summary: In this paper, we consider the problem of audio-visual synchronisation applied to videos "in-the-wild" (i.e. of general classes beyond speech). As a new task, we identify and curate a test set with high audio-visual correlation, namely VGG-Sound Sync. We compare a number of transformer-based architectural variants specifically designed to model audio and visual signals of arbitrary length, while significantly reducing memory requirements during training. We further conduct an in-depth analysis of the curated dataset and define an evaluation metric for open-domain audio-visual synchronisation. We apply our method to the standard lip-reading speech benchmarks, LRS2 and LRS3, with ablations on various aspects. Finally, we set the first benchmark for general audio-visual synchronisation with over 160 diverse classes in the new VGG-Sound Sync video dataset. In all cases, our proposed model outperforms the previous state-of-the-art by a significant margin.
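The summary does not spell out the evaluation protocol, but synchronisation methods are commonly scored by whether the predicted temporal offset between audio and video falls within a small tolerance of the ground truth. The Python sketch below illustrates such a tolerance-based accuracy; the function name, the use of frame-level offsets, and the +/-1-frame tolerance are illustrative assumptions, not the paper's exact metric.

    import numpy as np

    def sync_accuracy(pred_offsets, true_offsets, tolerance=1):
        """Fraction of clips whose predicted audio-visual offset (in frames)
        lies within +/- `tolerance` frames of the ground-truth offset."""
        pred = np.asarray(pred_offsets)
        true = np.asarray(true_offsets)
        return float(np.mean(np.abs(pred - true) <= tolerance))

    # Example: five clips with hypothetical predicted and true offsets (frames)
    print(sync_accuracy([0, 2, -1, 3, 0], [0, 1, -1, 5, 0], tolerance=1))  # 0.8

Under this kind of metric, a larger tolerance window makes the task easier, so reported accuracies are only comparable when the tolerance and frame rate are held fixed.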