Multimodal Emotion Recognition

57 papers with code • 3 benchmarks • 9 datasets

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, and V: Visual.

Please list the modalities used in brackets after the model name.

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the individual papers for details.
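For concreteness, here is a minimal sketch of LOSO evaluation, assuming a list of (features, label, session_id) samples and model-agnostic train/evaluate callables (all names below are hypothetical, not from any listed paper). IEMOCAP is recorded in five sessions, so each fold holds one session out for testing and trains on the remaining four.

```python
# Minimal LOSO sketch; `samples`, `train_model`, and `evaluate`
# are hypothetical placeholders, not from any listed paper.
SESSIONS = [1, 2, 3, 4, 5]  # IEMOCAP is recorded in five sessions

def loso_evaluate(samples, train_model, evaluate):
    """Train on four sessions, test on the held-out one, average the folds."""
    scores = []
    for held_out in SESSIONS:
        train = [s for s in samples if s[2] != held_out]
        test = [s for s in samples if s[2] == held_out]
        model = train_model(train)            # fit on four sessions
        scores.append(evaluate(model, test))  # score on the held-out session
    return sum(scores) / len(scores)          # report the cross-fold average
```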

eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos

xuecwu/emotions 29 Nov 2023

The prevailing use of short videos (SVs) to spread emotions makes emotion recognition in SVs a necessity.

Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction

leson502/CORECT_EMNLP2023 8 Nov 2023

Emotion recognition is a crucial task for human conversation understanding.

A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations

butterfliesss/sdt 31 Oct 2023

Emotion recognition in conversations (ERC), the task of recognizing the emotion of each utterance in a conversation, is crucial for building empathetic machines.

Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals

ispamm/mhyeeg 11 Oct 2023

Multimodal emotion recognition from physiological signals is receiving increasing attention because, unlike behavioral reactions, physiological signals cannot be controlled at will and therefore provide more reliable information.

Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Realistic Incomplete Data Scenarios

wooyoohl/noise-robust_mer 21 Sep 2023

Multimodal emotion recognition (MER) in practical scenarios presents a significant challenge due to the presence of incomplete data, such as missing or noisy data.

CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition

lijfrank-open/CFN-ESA 28 Jul 2023

RUME extracts conversation-level contextual emotional cues while pulling together the data distributions of the modalities; ACME performs multimodal interaction centered on the textual modality; LESM models emotion shift and captures emotion-shift information, thereby guiding the learning of the main task.

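The paper's ACME module is not reproduced here; as a rough, hypothetical illustration of what multimodal interaction centered on the textual modality can look like, the sketch below uses text features as attention queries over the acoustic and visual streams (class and parameter names are invented):

```python
import torch
import torch.nn as nn

class TextCenteredFusion(nn.Module):
    """Illustrative only: text queries attend to acoustic and visual keys."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, audio, video):
        # text: (B, Lt, dim); audio: (B, La, dim); video: (B, Lv, dim)
        t_a, _ = self.attn_a(text, audio, audio)  # text attends to audio
        t_v, _ = self.attn_v(text, video, video)  # text attends to video
        return text + t_a + t_v                   # residual, text-centered fusion
```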

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning

zeroqiaoba/mer2023-baseline 18 Apr 2023

The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.

Decoupled Multimodal Distilling for Emotion Recognition

mdswyz/dmd CVPR 2023

Specifically, the representation of each modality is decoupled into two parts, i.e., modality-irrelevant and modality-exclusive spaces, in a self-regression manner.

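A minimal sketch of this style of decoupling, assuming one shared ("modality-irrelevant") and one private ("modality-exclusive") projection per modality, with a reconstruction term standing in for the self-regression objective; this illustrates the general technique, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledEncoder(nn.Module):
    """Illustrative decoupling of one modality's feature into a
    modality-irrelevant and a modality-exclusive part, tied to the
    input by a reconstruction (self-regression-style) loss."""
    def __init__(self, dim=256):
        super().__init__()
        self.shared = nn.Linear(dim, dim)      # modality-irrelevant space
        self.private = nn.Linear(dim, dim)     # modality-exclusive space
        self.decoder = nn.Linear(2 * dim, dim)

    def forward(self, x):
        h_shared, h_private = self.shared(x), self.private(x)
        x_hat = self.decoder(torch.cat([h_shared, h_private], dim=-1))
        recon_loss = F.mse_loss(x_hat, x)      # the two parts must recover x
        return h_shared, h_private, recon_loss
```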

Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations

tmacmai/multimodal-information-bottleneck 31 Oct 2022

To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations.

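The paper defines its own MIB variants; as a generic, hypothetical illustration of an information-bottleneck objective, the sketch below uses a variational-IB-style loss in which a KL term compresses the fused representation while a cross-entropy term keeps it sufficient for the emotion label:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalBottleneck(nn.Module):
    """Rough variational-IB-style sketch (not the paper's exact MIB):
    encode a fused feature into a Gaussian code z, penalize its KL to a
    standard normal (compression), and keep z predictive of the label
    through a classifier head (sufficiency)."""
    def __init__(self, dim=256, z_dim=64, n_classes=5):
        super().__init__()
        self.mu = nn.Linear(dim, z_dim)
        self.logvar = nn.Linear(dim, z_dim)
        self.head = nn.Linear(z_dim, n_classes)

    def forward(self, fused, labels, beta=1e-3):
        mu, logvar = self.mu(fused), self.logvar(fused)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        ce = F.cross_entropy(self.head(z), labels)
        return ce + beta * kl  # sufficiency + compression trade-off
```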

Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities

zhuoyulang/if-mmin 27 Oct 2022

Multimodal emotion recognition leverages complementary information across modalities to improve performance.
