Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning

16 Apr 2018 · Samarth Tripathi, Sarthak Tripathi, Homayoon Beigi

Emotion recognition has become an important field of research in Human-Computer Interaction as techniques for modelling the various aspects of behaviour improve. As technology advances and our understanding of emotions deepens, the need for automatic emotion recognition systems grows. One direction this research is heading is the use of neural networks, which are adept at estimating complex functions that depend on a large number of diverse input sources. In this paper we exploit this effectiveness of neural networks to perform multimodal emotion recognition on the IEMOCAP dataset, using speech, text, and motion-capture data covering facial expressions, rotation, and hand movements. Prior research on IEMOCAP has concentrated on emotion detection from speech alone; our approach is the first to use the multiple modes of data offered by IEMOCAP for more robust and accurate emotion detection.
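Since the abstract describes fusing three input streams into one classifier, a short code sketch helps make the idea concrete. The following is a minimal late-fusion model in PyTorch, not the authors' exact architecture: the per-modality LSTM encoders, the feature dimensions (34 acoustic features, 189 mocap features, a 10,000-word vocabulary), and the four-class target are illustrative assumptions, loosely shaped like common IEMOCAP setups.

```python
# A minimal late-fusion sketch (not the paper's exact architecture):
# each modality gets its own encoder, and the final hidden states are
# concatenated before a shared classifier. All dimensions are assumptions.
import torch
import torch.nn as nn

class MultimodalEmotionNet(nn.Module):
    def __init__(self, speech_dim=34, text_vocab=10000, mocap_dim=189,
                 hidden=128, n_classes=4):
        super().__init__()
        # Speech branch: LSTM over per-frame acoustic features (e.g. MFCCs).
        self.speech_rnn = nn.LSTM(speech_dim, hidden, batch_first=True)
        # Text branch: token embedding + LSTM over the transcript.
        self.embed = nn.Embedding(text_vocab, 128)
        self.text_rnn = nn.LSTM(128, hidden, batch_first=True)
        # Mocap branch: LSTM over per-frame face/rotation/hand features.
        self.mocap_rnn = nn.LSTM(mocap_dim, hidden, batch_first=True)
        # Fusion: concatenate the three final hidden states, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, speech, tokens, mocap):
        _, (hs, _) = self.speech_rnn(speech)            # hs: (1, B, hidden)
        _, (ht, _) = self.text_rnn(self.embed(tokens))
        _, (hm, _) = self.mocap_rnn(mocap)
        fused = torch.cat([hs[-1], ht[-1], hm[-1]], dim=-1)
        return self.classifier(fused)                   # (B, n_classes) logits

# Smoke test with randomly shaped IEMOCAP-style inputs (batch of 8).
model = MultimodalEmotionNet()
logits = model(torch.randn(8, 100, 34),                # 100 speech frames
               torch.randint(0, 10000, (8, 30)),       # 30 transcript tokens
               torch.randn(8, 200, 189))               # 200 mocap frames
print(logits.shape)  # torch.Size([8, 4])
```

The key design choice this sketch illustrates is late fusion: each modality is summarized independently before the streams interact, so a weak or missing modality degrades the joint representation gracefully rather than corrupting the other encoders.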

Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Multimodal Emotion Recognition | Expressive hands and faces dataset (EHF) | SMPLify-X | v2v error | 52.9 | #1 |

Methods


No methods listed for this paper.