Multimodal Emotion Recognition

57 papers with code • 3 benchmarks • 9 datasets

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.

Please list the modalities in brackets after the model name.

All models must use the standard five emotion categories and are evaluated under the standard leave-one-session-out (LOSO) protocol. See the individual papers for details.
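
For context, IEMOCAP contains five recorded sessions, and LOSO evaluation trains on four sessions while testing on the held-out one, averaging over the five folds. A minimal sketch of the protocol in Python; the `train_and_evaluate` callable is a placeholder, not from any particular paper:

```python
import numpy as np

# IEMOCAP has five recorded sessions; LOSO trains on four and tests on the held-out one.
SESSIONS = [1, 2, 3, 4, 5]

def loso_evaluate(features, labels, sessions, train_and_evaluate):
    """Leave-one-session-out cross-validation, averaging the five fold accuracies.

    features, labels, sessions: numpy arrays with one row/entry per sample.
    train_and_evaluate: placeholder callable that fits a model on the training
    split and returns accuracy on the test split.
    """
    accuracies = []
    for held_out in SESSIONS:
        test_mask = (sessions == held_out)
        acc = train_and_evaluate(
            features[~test_mask], labels[~test_mask],   # train on four sessions
            features[test_mask], labels[test_mask],     # test on the held-out one
        )
        accuracies.append(acc)
    return float(np.mean(accuracies))
```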


Latest papers with no code

Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition

no code yet • 16 Nov 2023

Decades of research indicate that emotion recognition is more effective when drawing information from multiple modalities.

A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning

no code yet • 24 Oct 2023

In this work, we present a novel paradigm for contextualized Emotion Recognition using Graph Convolutional Network with Reinforcement Learning (conER-GRL).
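
The paper's full architecture is not reproduced here; as a rough illustration of its graph-convolutional component only, below is a single symmetric-normalized GCN layer over a conversation graph, assuming one node per utterance (the function name and shapes are illustrative assumptions):

```python
import torch

def gcn_layer(H, A, W):
    """One graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 H W).

    H: (num_utterances, feat_dim) node features, one row per utterance.
    A: (num_utterances, num_utterances) adjacency linking contextual utterances.
    W: (feat_dim, out_dim) learnable weight matrix.
    """
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))    # symmetric degree normalization
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```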

Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023

no code yet • 11 Sep 2023

Three different structures based on attention-guided feature gathering (AFG) are designed for deep feature fusion.
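
The three AFG structures themselves are only described in the paper; as a generic sketch of the underlying idea, attention pooling can gather frame-level audio or visual features into a single clip-level vector (the module name and shapes below are assumptions):

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Gather frame-level features into one clip-level vector via learned attention."""

    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # scalar relevance score per frame

    def forward(self, frames):                # frames: (batch, time, feat_dim)
        weights = torch.softmax(self.score(frames), dim=1)  # (batch, time, 1)
        return (weights * frames).sum(dim=1)                # (batch, feat_dim)
```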

Leveraging Label Information for Multimodal Emotion Recognition

no code yet • 5 Sep 2023

Finally, we devise a novel label-guided attentive fusion module to fuse the label-aware text and speech representations for emotion classification.
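
As a hedged illustration of label-guided fusion in general, not the paper's exact module, the sketch below uses learned emotion-label embeddings as cross-attention queries over concatenated text and speech features:

```python
import torch
import torch.nn as nn

class LabelGuidedFusion(nn.Module):
    """Fuse text and speech features with emotion-label embeddings as queries.

    Dimensions and the final pooling are illustrative assumptions.
    """

    def __init__(self, dim, num_labels, num_heads=4):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, dim)   # one query per emotion
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, text_feats, speech_feats):
        # text_feats, speech_feats: (batch, seq, dim)
        tokens = torch.cat([text_feats, speech_feats], dim=1)
        queries = self.label_emb.weight.unsqueeze(0).expand(tokens.size(0), -1, -1)
        fused, _ = self.attn(queries, tokens, tokens)    # (batch, num_labels, dim)
        return self.classifier(fused.mean(dim=1))        # (batch, num_labels)
```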

A Unified Transformer-based Network for multimodal Emotion Recognition

no code yet • 27 Aug 2023

We then present our UBVMT network, which performs emotion recognition by combining a 2D image-based representation of the ECG/PPG signal with facial expression features.
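
One common way to turn a 1D ECG/PPG trace into a 2D image-like representation is a time-frequency transform; the sketch below uses a log spectrogram (the sampling rate and window length are illustrative defaults, not the paper's settings), whose CNN embedding could then be fused with facial features:

```python
import numpy as np
from scipy.signal import spectrogram

def ppg_to_image(signal, fs=64, nperseg=128):
    """Convert a 1D ECG/PPG trace into a 2D time-frequency image.

    fs (Hz) and nperseg are assumed values for illustration.
    Returns a (freq_bins, time_frames) log-magnitude spectrogram.
    """
    _, _, spec = spectrogram(signal, fs=fs, nperseg=nperseg)
    return np.log1p(spec)
```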

Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition

no code yet • 8 Aug 2023

During the feature fusion stage, we propose a Contribution-aware Fusion Mechanism (CFM) and a Context Refusion Mechanism (CRM) for multimodal and context integration, respectively.
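
As a rough sketch of contribution-aware fusion in general (the paper's CFM and CRM details differ), each modality can receive an input-dependent weight before a weighted sum:

```python
import torch
import torch.nn as nn

class ContributionAwareFusion(nn.Module):
    """Weight each modality by a learned, input-dependent contribution score.

    A minimal sketch of the general idea, not the paper's exact mechanism.
    """

    def __init__(self, dim, num_modalities=3):
        super().__init__()
        self.gates = nn.ModuleList(nn.Linear(dim, 1) for _ in range(num_modalities))

    def forward(self, modality_feats):           # list of (batch, dim) tensors
        scores = torch.cat([g(x) for g, x in zip(self.gates, modality_feats)], dim=1)
        weights = torch.softmax(scores, dim=1)   # (batch, num_modalities)
        stacked = torch.stack(modality_feats, dim=1)         # (batch, M, dim)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (batch, dim)
```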

Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning

no code yet • 12 Jul 2023

Emotion recognition is an important research direction in artificial intelligence, helping machines understand and adapt to human emotional states.

TACOformer:Token-channel compounded Cross Attention for Multimodal Emotion Recognition

no code yet • 23 Jun 2023

Recently, emotion recognition based on physiological signals has emerged as a field with intensive research.

A Comparison of Time-based Models for Multimodal Emotion Recognition

no code yet • 22 Jun 2023

In this study, the performance of different sequence models in multi-modal emotion recognition was compared.

EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge

no code yet • 14 Jun 2023

One of the primary challenges in emotion recognition is effectively utilizing the various cues (modalities) available in the data.