Vector-quantized neural networks for acoustic unit discovery in the ZeroSpeech 2020 challenge

19 May 2020  ·  Benjamin van Niekerk, Leanne Nortje, Herman Kamper

In this paper, we explore vector quantization for acoustic unit discovery. Leveraging unlabelled data, we aim to learn discrete representations of speech that separate phonetic content from speaker-specific details. We propose two neural models to tackle this challenge: both use vector quantization to map continuous features to a finite set of codes. The first model is a type of vector-quantized variational autoencoder (VQ-VAE). The VQ-VAE encodes speech into a sequence of discrete units before reconstructing the audio waveform. Our second model combines vector quantization with contrastive predictive coding (VQ-CPC). The idea is to learn a representation of speech by predicting future acoustic units. We evaluate the models on English and Indonesian data for the ZeroSpeech 2020 challenge. In ABX phone discrimination tests, both models outperform all submissions to the 2019 and 2020 challenges, with a relative improvement of more than 30%. The models also perform competitively on a downstream voice conversion task. Of the two, VQ-CPC performs slightly better in general and is simpler and faster to train. Finally, probing experiments show that vector quantization is an effective bottleneck, forcing the models to discard speaker information.
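
Both models rely on the same discretization step: a continuous encoder output is snapped to its nearest entry in a learned codebook, and gradients are passed straight through the non-differentiable lookup. The sketch below illustrates such a vector-quantization bottleneck in PyTorch. It is not the authors' implementation; the codebook size, feature dimension, and commitment weight are placeholder values chosen for illustration.

```python
# Minimal sketch of a vector-quantization bottleneck with a straight-through
# gradient estimator, in the spirit of VQ-VAE. Illustrative only: codebook
# size, code dimension, and the commitment weight are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VQBottleneck(nn.Module):
    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight on the commitment term

    def forward(self, z_e):
        # z_e: continuous encoder output, shape (batch, time, code_dim)
        flat = z_e.reshape(-1, z_e.size(-1))

        # Squared Euclidean distance from each frame to every codebook entry
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))
        indices = dists.argmin(dim=1)              # discrete acoustic unit IDs
        z_q = self.codebook(indices).view_as(z_e)  # quantized features

        # Codebook loss (pull codes toward encoder outputs) plus
        # commitment loss (keep encoder outputs close to their codes)
        vq_loss = (F.mse_loss(z_q, z_e.detach())
                   + self.beta * F.mse_loss(z_e, z_q.detach()))

        # Straight-through estimator: copy gradients from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(z_e.shape[:-1]), vq_loss
```

In a VQ-VAE, the quantized features would feed a decoder that reconstructs the waveform; in VQ-CPC, the same kind of bottleneck sits before a contrastive objective that predicts future acoustic units instead of a reconstruction loss.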


Results from the Paper


 Ranked #1 on Voice Conversion on ZeroSpeech 2019 English (using extra training data)

Task                    | Dataset                 | Model  | Metric Name        | Metric Value | Global Rank
Voice Conversion        | ZeroSpeech 2019 English | VQ-CPC | Speaker Similarity | 3.8          | #1
Acoustic Unit Discovery | ZeroSpeech 2019 English | VQ-CPC | ABX-across         | 13.4         | #1
Acoustic Unit Discovery | ZeroSpeech 2019 English | VQ-VAE | ABX-across         | 14           | #2
Voice Conversion        | ZeroSpeech 2019 English | VQ-VAE | Speaker Similarity | 3.49         | #2
