Multimodal Emotion Recognition

52 papers with code • 3 benchmarks • 9 datasets

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.

Please include the modalities in brackets after the model name.

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
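
For readers unfamiliar with the protocol, LOSO on IEMOCAP's five sessions amounts to grouped cross-validation: hold out one session, train on the other four, and average over the five folds. The sketch below illustrates this with scikit-learn; the features, labels, and placeholder classifier are assumptions for demonstration, not any model from this leaderboard.

```python
# Minimal sketch of leave-one-session-out (LOSO) evaluation on IEMOCAP.
# Assumes precomputed utterance features X, labels y, and a session id
# (1-5) per utterance; the classifier is a placeholder, not a specific
# model from the leaderboard.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))           # stand-in utterance features
y = rng.integers(0, 5, size=100)         # five emotion categories
sessions = rng.integers(1, 6, size=100)  # IEMOCAP session id per utterance

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=sessions):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"LOSO accuracy: {np.mean(scores):.3f}")
```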

Most implemented papers

Multimodal Speech Emotion Recognition and Ambiguity Resolution

Demfier/multimodal-speech-emotion-recognition 12 Apr 2019

In this work, we adopt a feature-engineering-based approach to tackle the task of speech emotion recognition.
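
As a rough illustration of what such feature engineering can look like (this is not the paper's exact feature set), the sketch below computes per-utterance MFCC statistics with librosa, a common hand-engineered baseline for speech emotion recognition.

```python
# Illustrative acoustic feature engineering for speech emotion recognition.
# MFCC mean/std statistics are a common hand-engineered baseline; the exact
# feature set here is an assumption, not the paper's pipeline.
import numpy as np
import librosa

def utterance_features(wav_path: str) -> np.ndarray:
    """Return a fixed-length feature vector for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, frames)
    # Summarize frame-level features with per-coefficient statistics.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # (26,)
```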

Multimodal Speech Emotion Recognition Using Audio and Text

david-yoon/multimodal-speech-emotion 10 Oct 2018

Speech emotion recognition is a challenging task, and building well-performing classifiers has relied extensively on audio features.

Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis

robertjkeck2/EmoTe 17 Apr 2019

In this paper, based on audio and text, we consider the task of multimodal sentiment analysis and propose a novel fusion strategy, comprising both multi-feature fusion and multi-modality fusion, to improve the accuracy of audio-text sentiment analysis.
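
One simple way to realize audio-text fusion is to concatenate modality embeddings before classification. The PyTorch sketch below shows this generic late-fusion baseline; it is not the fusion strategy proposed in the paper, and the encoder dimensions are arbitrary assumptions.

```python
# Generic audio-text feature fusion for sentiment/emotion classification.
# An illustrative late-fusion baseline, not the paper's proposed strategy;
# all dimensions are arbitrary assumptions.
import torch
import torch.nn as nn

class AudioTextFusion(nn.Module):
    def __init__(self, audio_dim=26, text_dim=300, hidden=128, n_classes=5):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Fuse by concatenating the two modality embeddings.
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, audio_feats, text_feats):
        fused = torch.cat([self.audio_enc(audio_feats),
                           self.text_enc(text_feats)], dim=-1)
        return self.classifier(fused)

logits = AudioTextFusion()(torch.randn(8, 26), torch.randn(8, 300))  # (8, 5)
```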

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning

zeroqiaoba/mer2023-baseline 18 Apr 2023

The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.

End-to-End Multimodal Emotion Recognition using Deep Neural Networks

tzirakis/Multimodal-Emotion-Recognition 27 Apr 2017

The system is then trained in an end-to-end fashion where, by also exploiting the correlations between the streams, we significantly outperform traditional approaches based on handcrafted auditory and visual features for predicting spontaneous and natural emotions on the RECOLA database of the AVEC 2016 emotion recognition challenge.
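
In broad strokes, such an end-to-end setup learns each stream from (near-)raw input and fuses the streams before a shared recurrent layer that models temporal context. The sketch below is a loose schematic of that idea, not the authors' architecture; all layer sizes are invented.

```python
# Schematic end-to-end audio-visual emotion model: each stream is learned
# from (near-)raw input, and the streams are fused before a recurrent
# layer. A loose illustration only; sizes are invented for the example.
import torch
import torch.nn as nn

class EndToEndAV(nn.Module):
    def __init__(self, n_outputs=2):  # e.g. arousal/valence for RECOLA-style labels
        super().__init__()
        self.audio = nn.Sequential(   # raw waveform chunk -> embedding
            nn.Conv1d(1, 32, kernel_size=80, stride=16), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.visual = nn.Sequential(  # face crop -> embedding
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.LSTM(64, 64, batch_first=True)
        self.head = nn.Linear(64, n_outputs)

    def forward(self, wav, frames):
        # wav: (B, T, samples), frames: (B, T, 3, H, W), T timesteps
        B, T = wav.shape[:2]
        a = self.audio(wav.reshape(B * T, 1, -1)).reshape(B, T, -1)
        v = self.visual(frames.reshape(B * T, *frames.shape[2:])).reshape(B, T, -1)
        out, _ = self.rnn(torch.cat([a, v], dim=-1))  # fuse, then model time
        return self.head(out)                         # per-timestep predictions

model = EndToEndAV()
preds = model(torch.randn(2, 4, 1600), torch.randn(2, 4, 3, 48, 48))  # (2, 4, 2)
```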

Context-Dependent Sentiment Analysis in User-Generated Videos

senticnet/sc-lstm ACL 2017

Multimodal sentiment analysis is a developing area of research that involves identifying sentiment in videos.

Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning

Samarth-Tripathi/IEMOCAP-Emotion-Detection 16 Apr 2018

Emotion recognition has become an important field of research in Human-Computer Interaction as we improve the techniques for modelling the various aspects of behaviour.

DialogueRNN: An Attentive RNN for Emotion Detection in Conversations

SenticNet/conv-emotion 1 Nov 2018

Emotion detection in conversations is a necessary step for a number of applications, including opinion mining over chat history, social media threads, debates, argumentation mining, understanding consumer feedback in live conversations, etc.
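
The general idea, a recurrent model over utterance embeddings with attention over the conversational context, can be sketched as follows. This is a simplified illustration only, not DialogueRNN itself, which additionally tracks speaker ("party") and global states.

```python
# Simplified attentive RNN over a conversation: a GRU runs over utterance
# embeddings, and attention pooling over all steps provides dialogue-level
# context for classifying each utterance. Illustrates the general idea
# only; it is not the DialogueRNN architecture.
import torch
import torch.nn as nn

class AttentiveConversationRNN(nn.Module):
    def __init__(self, utt_dim=100, hidden=64, n_classes=5):
        super().__init__()
        self.gru = nn.GRU(utt_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, utts):                     # utts: (B, T, utt_dim)
        h, _ = self.gru(utts)                    # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention over the dialogue
        ctx = (w * h).sum(dim=1, keepdim=True).expand_as(h)
        return self.head(torch.cat([h, ctx], dim=-1))  # per-utterance logits

logits = AttentiveConversationRNN()(torch.randn(2, 10, 100))  # (2, 10, 5)
```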

Emotion Recognition in Audio and Video Using Deep Neural Networks

julieeF/CS231N-Project 15 Jun 2020

Humans are able to comprehend information from multiple domains, e.g., speech, text, and vision.

COGMEN: COntextualized GNN based Multimodal Emotion recognitioN

exploration-lab/cogmen NAACL 2022

Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions.