Sparse in space and time: audio-visual synchronisation with trainable selectors
The objective of this paper is audio-visual synchronisation of general videos ‘in the wild’. For such videos, the events that may be harnessed for synchronisation cues may be spatially small and may occur only infrequently during a many seconds-long video clip, i.e. the...
Main Authors: Iashin, V, Xie, W, Rahtu, E, Zisserman, A
Format: Conference item
Language: English
Published: British Machine Vision Association, 2022
Similar Items
- Audio-visual synchronisation in the wild
  by: Chen, H, et al.
  Published: (2021)
- Synchformer: efficient synchronization from sparse cues
  by: Iashin, V, et al.
  Published: (2024)
- The Lottery Ticket Hypothesis: On Sparse, Trainable Neural Networks
  by: Frankle, Jonathan
  Published: (2023)
- The lottery ticket hypothesis: Finding sparse, trainable neural networks
  by: Frankle, Jonathan, et al.
  Published: (2021)
- Tiny Trainable Instruments
  by: Montoya-Moraga, Aarón
  Published: (2022)