VoxCeleb2: Deep speaker recognition

The objective of this paper is speaker recognition under noisy and unconstrained conditions.

We make two key contributions. First, we introduce a very large-scale audio-visual speaker recognition dataset collected from open-source media. Using a fully automated pipeline, we curate VoxCeleb2, which contains over a million utterances from over 6,000 speakers. This is several times larger than any publicly available speaker recognition dataset.

Second, we develop and compare Convolutional Neural Network (CNN) models and training strategies that can effectively recognise identities from voice under various conditions. The models trained on the VoxCeleb2 dataset surpass the performance of previous works on a benchmark dataset by a significant margin.

Bibliographic Details
Main Authors: Chung, J, Nagrani, A, Zisserman, A
Format: Conference item
Published: International Speech Communication Association, 2018
Collection: OXFORD
Institution: University of Oxford
Record ID: oxford-uuid:08ab75c5-aa1c-49fc-b36a-1280c6a309c4