Multimodal Emotion Recognition
57 papers with code • 3 benchmarks • 9 datasets
This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.
Please include the modality in brackets after the model name.
All models must use the standard five emotion categories and be evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
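The LOSO protocol above can be sketched as follows. This is an illustrative split generator (not any particular paper's evaluation code), assuming IEMOCAP's five recorded sessions numbered 1–5; the function name `loso_splits` is hypothetical.

```python
# Leave-one-session-out (LOSO) splits for IEMOCAP's five recorded sessions.
# Each fold holds out one full session for testing and trains on the other
# four, so no speaker appears in both the train and test partitions.

def loso_splits(sessions=(1, 2, 3, 4, 5)):
    """Yield (train_sessions, held_out_session) pairs, one per fold."""
    for held_out in sessions:
        train = [s for s in sessions if s != held_out]
        yield train, held_out

for train, test in loso_splits():
    print(f"train on sessions {train}, test on session {test}")
```

Reported LOSO results are typically the accuracy (weighted or unweighted) averaged over the five folds.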
Latest papers with no code
Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition
Decades of research indicate that emotion recognition is more effective when drawing information from multiple modalities.
A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning
In this work, we present a novel paradigm for contextualized Emotion Recognition using Graph Convolutional Network with Reinforcement Learning (conER-GRL).
Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023
Three different structures based on attention-guided feature gathering (AFG) are designed for deep feature fusion.
Leveraging Label Information for Multimodal Emotion Recognition
Finally, we devise a novel label-guided attentive fusion module to fuse the label-aware text and speech representations for emotion classification.
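As a rough illustration of attentive fusion in general (not the paper's label-guided module): each modality embedding is scored against a query vector, the scores are softmax-normalized, and the fused representation is the weighted sum. The function name `attention_fuse` and the toy dimensions are assumptions for the sketch.

```python
import numpy as np

# Illustrative attention-weighted fusion of per-modality features: score each
# modality embedding against a query, softmax the scores over modalities, and
# return the weighted sum as the fused representation.

def attention_fuse(features, query):
    """features: (num_modalities, dim); query: (dim,). Returns (dim,) fused vector."""
    scores = features @ query                 # one relevance score per modality
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over modalities
    return weights @ features                 # convex combination of features

rng = np.random.default_rng(0)
audio, text, visual = rng.normal(size=(3, 8))   # toy A/T/V embeddings, dim=8
fused = attention_fuse(np.stack([audio, text, visual]), query=rng.normal(size=8))
print(fused.shape)  # (8,)
```

In a trained model the query would itself be learned, e.g. derived from label embeddings as in label-guided variants.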
A Unified Transformer-based Network for multimodal Emotion Recognition
We then present our UBVMT network, which is trained to perform emotion recognition by combining the 2D image-based representation of the ECG/PPG signal with facial expression features.
Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition
On the other hand, during the feature fusion stage, we propose a Contribution-aware Fusion Mechanism (CFM) and a Context Refusion Mechanism (CRM) for multimodal and context integration, respectively.
Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning
Emotion recognition is an important research direction in artificial intelligence, helping machines understand and adapt to human emotional states.
TACOformer: Token-Channel Compounded Cross Attention for Multimodal Emotion Recognition
Recently, emotion recognition based on physiological signals has emerged as a field with intensive research.
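For orientation, a minimal single-head cross-attention sketch (generic, not TACOformer's compounded token-channel design): tokens of one physiological modality attend over tokens of another. The function name `cross_attention` and the toy shapes are assumptions.

```python
import numpy as np

# Minimal single-head cross-attention: queries come from one modality,
# keys/values from another, so the first modality gathers information
# from the second.

def cross_attention(q_tokens, kv_tokens):
    """q_tokens: (Tq, d); kv_tokens: (Tk, d). Returns (Tq, d)."""
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)   # (Tq, Tk) scaled similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # row-wise softmax
    return attn @ kv_tokens                        # attend over the other modality

rng = np.random.default_rng(1)
eeg = rng.normal(size=(5, 16))          # 5 query tokens from one signal
peripheral = rng.normal(size=(7, 16))   # 7 key/value tokens from another
print(cross_attention(eeg, peripheral).shape)  # (5, 16)
```

Full transformer variants add learned query/key/value projections, multiple heads, and residual connections on top of this core operation.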
A Comparison of Time-based Models for Multimodal Emotion Recognition
In this study, the performance of different sequence models in multi-modal emotion recognition was compared.
EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge
One of the primary challenges in emotion recognition is effectively utilizing the various cues (modalities) available in the data.