Multimodal Speech Emotion Recognition and Ambiguity Resolution

12 Apr 2019 · Gaurav Sahu

Identifying emotion from speech is a non-trivial task, owing in part to the ambiguous definition of emotion itself. In this work, we adopt a feature-engineering-based approach to tackle the task of speech emotion recognition. Formalizing our problem as a multi-class classification problem, we compare the performance of two categories of models. For both, we extract eight hand-crafted features from the audio signal. In the first approach, the extracted features are used to train six traditional machine learning classifiers, whereas the second approach is based on deep learning, wherein a baseline feed-forward neural network and an LSTM-based classifier are trained over the same features. To resolve ambiguity in communication, we also include features from the text domain. We report accuracy, F-score, precision, and recall for the different experimental settings in which we evaluated our models. Overall, we show that lighter machine-learning-based models trained over a few hand-crafted features achieve performance comparable to the current deep-learning-based state-of-the-art method for emotion recognition.
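The page does not enumerate the paper's eight hand-crafted features, so the sketch below is only an illustration of utterance-level acoustic feature extraction with librosa; the specific statistics chosen here (RMS energy, zero-crossing rate, pitch, raw-signal moments) are assumptions, not the paper's exact feature set.

```python
# A minimal sketch of hand-crafted audio feature extraction.
# The exact eight features from the paper are not listed on this page,
# so the descriptors below are illustrative assumptions.
import numpy as np
import librosa

def extract_features(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=16000)

    # Frame-level descriptors
    rms = librosa.feature.rms(y=y)[0]               # short-time energy
    zcr = librosa.feature.zero_crossing_rate(y)[0]  # noisiness proxy
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)   # pitch track

    # Collapse each track to simple utterance-level statistics,
    # yielding a small fixed-length feature vector (eight values here).
    feats = [
        rms.mean(), rms.std(),
        zcr.mean(), zcr.std(),
        np.nanmean(f0), np.nanstd(f0),
        y.mean(), y.std(),
    ]
    return np.array(feats, dtype=np.float32)
```

A vector like this can be fed directly to the traditional classifiers discussed in the abstract, optionally concatenated with text-domain features.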


Datasets

IEMOCAP

Results from the Paper


 Ranked #1 on Speech Emotion Recognition on IEMOCAP (F1 metric, using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Speech Emotion Recognition | IEMOCAP | Ensemble (Acoustic + Text): Random Forests + Gradient Boosted Trees + Multi-Layer Perceptron + Multinomial Naive Bayes + Logistic Regression | F1 | 0.718 | #1 | Yes |
| Speech Emotion Recognition | IEMOCAP | Ensemble (Acoustic + Text), same as above | UA | 0.701 | #4 | Yes |
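
The table names the ensemble's five constituent classifiers. Below is a minimal scikit-learn sketch of such an ensemble; the soft-voting scheme, hyperparameters, and the MinMax rescaling in front of Multinomial Naive Bayes are assumptions for illustration, not details from the paper.

```python
# A sketch of the five-classifier ensemble named in the results table,
# assuming soft voting; hyperparameters and the exact fusion scheme
# are assumptions, not taken from the paper.
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# MultinomialNB requires non-negative inputs, so its pipeline
# rescales the features to [0, 1] first.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300)),
        ("gb", GradientBoostingClassifier()),
        ("mlp", MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)),
        ("mnb", make_pipeline(MinMaxScaler(), MultinomialNB())),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average class probabilities across models
)

# X: concatenated acoustic + text features, y: emotion labels
# ensemble.fit(X_train, y_train); preds = ensemble.predict(X_test)
```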

Methods


Methods used in the paper, as described in the abstract and results above: hand-crafted acoustic features combined with text-domain features; traditional classifiers (Random Forests, Gradient Boosted Trees, Multi-Layer Perceptron, Multinomial Naive Bayes, Logistic Regression); a baseline feed-forward neural network; an LSTM-based classifier; and an acoustic + text ensemble.
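
For the LSTM-based classifier mentioned in the abstract, a minimal PyTorch sketch follows; the hidden size and the four-class output (a common reduction of IEMOCAP labels) are assumptions, as the page gives no architectural details.

```python
# A minimal PyTorch sketch of an LSTM-based emotion classifier over
# frame-level feature sequences; layer sizes and the 4-class output
# are assumptions, not details from the paper.
import torch
import torch.nn as nn

class LSTMEmotionClassifier(nn.Module):
    def __init__(self, input_dim: int = 8, hidden_dim: int = 64,
                 num_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) sequence of frame-level features
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1])      # logits over emotion classes

# Example: logits = LSTMEmotionClassifier()(torch.randn(16, 100, 8))
```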