VoxCeleb2: Deep Speaker Recognition

14 Jun 2018  ·  Joon Son Chung, Arsha Nagrani, Andrew Zisserman

The objective of this paper is speaker recognition under noisy and unconstrained conditions. We make two key contributions. First, we introduce a very large-scale audio-visual speaker recognition dataset collected from open-source media. Using a fully automated pipeline, we curate VoxCeleb2 which contains over a million utterances from over 6,000 speakers. This is several times larger than any publicly available speaker recognition dataset. Second, we develop and compare Convolutional Neural Network (CNN) models and training strategies that can effectively recognise identities from voice under various conditions. The models trained on the VoxCeleb2 dataset surpass the performance of previous works on a benchmark dataset by a significant margin.
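As a rough illustration of the verification setup such models support (a generic sketch, not the paper's code): a speaker CNN maps each utterance to a fixed-length embedding, two embeddings are compared with cosine similarity, and a threshold decides whether the pair is the same speaker. The embedding dimension (512) and threshold (0.5) below are arbitrary placeholder assumptions.

```python
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two fixed-length speaker embeddings."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(a @ b)

def same_speaker(emb_a, emb_b, threshold=0.5):
    """Accept the trial as 'same speaker' if the score clears the threshold.
    The threshold here is an arbitrary placeholder; in practice it is tuned
    on a held-out trial list (e.g. at the equal-error-rate operating point)."""
    return cosine_score(emb_a, emb_b) >= threshold

# Toy 512-dim vectors standing in for CNN utterance embeddings.
rng = np.random.default_rng(0)
enroll = rng.normal(size=512)
test_same = enroll + 0.1 * rng.normal(size=512)  # perturbed copy of enroll
test_diff = rng.normal(size=512)                 # unrelated "speaker"
```

In this toy setup the perturbed copy scores near 1.0 while an unrelated random vector scores near 0, which is the separation a trained embedding network is meant to produce for real utterances.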


Datasets


Introduced in the Paper:

VoxCeleb2

Used in the Paper:

VGGFace2

Results from the Paper


Ranked #1 on Speaker Verification on VoxCeleb2 (using extra training data)

Task                  Dataset    Model      Metric Name  Metric Value  Global Rank  Uses Extra Training Data
Speaker Verification  VoxCeleb2  ResNet-50  EER          100           #1           Yes
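The EER (equal error rate) used as the metric above is the operating point at which the false-accept and false-reject rates are equal. A minimal sketch of computing it from trial scores and same/different labels (a generic implementation, not the paper's evaluation code; assumes both positive and negative trials are present):

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal Error Rate from verification trial scores.

    scores: higher means 'more likely same speaker'.
    labels: 1 for same-speaker trials, 0 for different-speaker trials.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)[np.argsort(scores)[::-1]]  # sort by descending score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # After accepting the top-k trials: false-accept and false-reject rates.
    fa = np.cumsum(1 - labels) / n_neg       # negatives wrongly accepted
    fr = 1 - np.cumsum(labels) / n_pos       # positives wrongly rejected
    idx = np.argmin(np.abs(fa - fr))         # point where the two rates cross
    return (fa[idx] + fr[idx]) / 2

# Perfectly separated toy trials give an EER of 0.
eer = compute_eer([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
```

EER is conventionally reported as a percentage, so a value of 0.04 from this function corresponds to the "4% EER" style figures seen on speaker verification benchmarks.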
