Disentangled Speech Embeddings Using Cross-Modal Self-Supervision
The objective of this paper is to learn representations of speaker identity without access to manually annotated data. To do so, we develop a self-supervised learning objective that exploits the natural cross-modal synchrony between faces and audio in video. The key idea behind our approach is to tease apart, without annotation, the representations of linguistic content and speaker identity. We construct a two-stream architecture which: (1) shares low-level features common to both representations; and (2) provides a natural mechanism for explicitly disentangling these factors, offering the potential for greater generalisation to novel combinations of content and identity and ultimately producing speaker identity representations that are more robust. We train our method on a large-scale audio-visual dataset of talking heads 'in the wild', and demonstrate its efficacy by evaluating the learned speaker representations for standard speaker recognition performance.
Main Authors: | Nagrani, A; Chung, JS; Albanie, S; Zisserman, A
---|---
Format: | Conference item
Language: | English
Published: | IEEE, 2020
collection | OXFORD |
id | oxford-uuid:7ea9a007-6578-44f7-9ce8-9b3197cbeeb8 |
institution | University of Oxford |
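The abstract above describes a two-stream architecture that shares low-level audio features before splitting into a content stream and a speaker-identity stream, trained with a cross-modal self-supervised objective against the face track. The snippet below is only a minimal illustrative sketch of that general idea, not the authors' model: the layer sizes, head design, and the simple clip-level contrastive loss are assumptions made for the example.

```python
# Hedged sketch (not the paper's exact architecture): a two-stream audio network
# with a shared low-level trunk, a per-frame "content" head and a time-pooled
# "identity" head, plus a toy cross-modal contrastive loss pairing each audio
# clip with the face embedding of the track it came from. All names and
# dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamAudioNet(nn.Module):
    def __init__(self, n_mels=40, feat_dim=256, emb_dim=128):
        super().__init__()
        # Shared low-level trunk over log-mel spectrogram frames.
        self.trunk = nn.Sequential(
            nn.Conv1d(n_mels, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Content head: one embedding per time step (could be synchronised with face frames).
        self.content_head = nn.Conv1d(feat_dim, emb_dim, kernel_size=1)
        # Identity head: one embedding per clip (pooled over time).
        self.identity_head = nn.Linear(feat_dim, emb_dim)

    def forward(self, mel):                                    # mel: (B, n_mels, T)
        h = self.trunk(mel)                                    # (B, feat_dim, T)
        content = self.content_head(h)                         # (B, emb_dim, T)
        identity = self.identity_head(h.mean(dim=2))           # (B, emb_dim)
        return F.normalize(content, dim=1), F.normalize(identity, dim=1)


def clip_level_nce(audio_emb, face_emb, temperature=0.07):
    """InfoNCE-style loss: match each audio clip to the face embedding of its own track."""
    logits = audio_emb @ face_emb.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(audio_emb.size(0))                  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    model = TwoStreamAudioNet()
    mel = torch.randn(4, 40, 100)                              # 4 clips, 100 spectrogram frames
    face_identity = F.normalize(torch.randn(4, 128), dim=1)    # stand-in face embeddings
    content, identity = model(mel)
    loss = clip_level_nce(identity, face_identity)
    print(content.shape, identity.shape, loss.item())
```

In this toy setup the identity head is trained only through the clip-level pairing, while the per-frame content head is left free to carry synchronisation information; the paper's actual disentangling objective and training data (the talking-heads dataset mentioned in the abstract) are not reproduced here.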