Contrastive Unsupervised Learning for Speech Emotion Recognition

Speech emotion recognition (SER) is a key technology for enabling more natural human-machine communication. However, SER has long suffered from a lack of large-scale public labeled datasets. To circumvent this problem, we investigate how unsupervised representation learning on unlabeled datasets can benefit SER. We show that the contrastive predictive coding (CPC) method can learn salient representations from unlabeled data that improve emotion recognition performance. In our experiments, this method achieved state-of-the-art concordance correlation coefficient (CCC) performance for all emotion primitives (activation, valence, and dominance) on IEMOCAP. Additionally, on the MSP-Podcast dataset, our method obtained considerable improvements over baselines.
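
CPC trains an encoder and an autoregressive context network by predicting future latent vectors and scoring each prediction against negatives with the InfoNCE loss (van den Oord et al., 2018). The snippet below is a minimal NumPy sketch of that loss only, not the authors' implementation: the function name is a placeholder, using other batch items as negatives is an illustrative assumption, and the encoder, context network, and per-horizon prediction heads of the full method are omitted.

```python
import numpy as np

def info_nce_loss(predictions, targets):
    """InfoNCE objective used by CPC.

    predictions: (batch, dim) context-conditioned predictions of
                 future latent vectors, one per sequence.
    targets:     (batch, dim) true future latents from the encoder.
    For each row, the matching target (the diagonal) is the positive;
    the remaining rows in the batch serve as negatives.
    """
    logits = predictions @ targets.T                  # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))             # -log p(positive | candidates)
```

In the full method, one such loss term is computed per prediction horizon, and the learned representations are then fed to a downstream emotion recognizer.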

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Speech Emotion Recognition | MSP-Podcast (Activation) | preCPC | CCC | 0.706 | #2 |
| Speech Emotion Recognition | MSP-Podcast (Dominance) | preCPC | CCC | 0.639 | #2 |
| Speech Emotion Recognition | MSP-Podcast (Valence) | preCPC | CCC | 0.377 | #2 |
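
For reference, the concordance correlation coefficient reported above measures both correlation and agreement in scale and location between predicted and gold emotion ratings: CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A minimal NumPy sketch of Lin's CCC follows; the function name is a placeholder, not taken from the paper.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient between two rating sequences."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    # Population (biased) variances and covariance, as in Lin's definition.
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```

Unlike plain Pearson correlation, CCC penalizes predictions that are correlated with the targets but shifted or rescaled, which is why it is the standard metric for dimensional emotion recognition.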
