Emotion Recognition

458 papers with code • 7 benchmarks • 45 datasets

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Most implemented papers

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

declare-lab/MELD ACL 2019

We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations.
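
A minimal sketch of preparing contextual examples for emotion recognition in conversations, in the spirit of the MELD baselines. The CSV layout (column names "Dialogue_ID", "Utterance", "Emotion") is an assumption and may not match the files shipped in declare-lab/MELD.

```python
# Hypothetical CSV layout; adjust column names to the actual MELD files.
import pandas as pd

def build_context_windows(csv_path, context_size=3):
    df = pd.read_csv(csv_path)
    examples = []
    for _, dialogue in df.groupby("Dialogue_ID"):
        utterances = dialogue["Utterance"].tolist()
        emotions = dialogue["Emotion"].tolist()
        for i, (utt, emo) in enumerate(zip(utterances, emotions)):
            # Preceding utterances supply the conversational context.
            context = utterances[max(0, i - context_size):i]
            examples.append({"context": context, "utterance": utt, "label": emo})
    return examples
```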

Multimodal Speech Emotion Recognition and Ambiguity Resolution

Demfier/multimodal-speech-emotion-recognition 12 Apr 2019

In this work, we adopt a feature-engineering based approach to tackle the task of speech emotion recognition.
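
A minimal sketch of a feature-engineering pipeline for speech emotion recognition: hand-crafted, utterance-level statistics over low-level audio descriptors, fed to a classical classifier. The exact feature set in the referenced repository may differ; this only illustrates the general approach.

```python
import numpy as np
import librosa

def extract_features(wav_path):
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)
    zcr = librosa.feature.zero_crossing_rate(y=y)        # (1, frames)
    rms = librosa.feature.rms(y=y)                        # (1, frames)
    frames = np.vstack([mfcc, zcr, rms])
    # Summarise frame-level descriptors as utterance-level statistics,
    # producing a fixed-length vector for an SVM, MLP, or similar classifier.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])
```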

Multimodal Speech Emotion Recognition Using Audio and Text

david-yoon/multimodal-speech-emotion 10 Oct 2018

Speech emotion recognition is a challenging task, and well-performing classifiers have so far relied heavily on audio features alone.
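
A minimal sketch of combining audio and text for emotion classification, assuming precomputed audio feature vectors and token-id sequences. The layer sizes and the fusion-by-concatenation choice are illustrative, not the exact architecture of the referenced paper.

```python
import torch
import torch.nn as nn

class AudioTextClassifier(nn.Module):
    def __init__(self, audio_dim=40, vocab_size=10000, emb_dim=128,
                 hidden=64, num_classes=4):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.text_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, audio_feats, token_ids):
        a = self.audio_enc(audio_feats)              # (batch, hidden)
        _, h = self.text_enc(self.embed(token_ids))  # h: (1, batch, hidden)
        # Late fusion: concatenate the audio and text representations.
        return self.classifier(torch.cat([a, h[-1]], dim=-1))
```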

Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors

victorywys/RAVEN 23 Nov 2018

Humans convey their intentions through the usage of both verbal and nonverbal behaviors during face-to-face communication.
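
A minimal sketch of shifting a word representation with nonverbal cues, assuming per-word visual and acoustic feature vectors are available. The gating scheme and dimensions are illustrative rather than the exact RAVEN design.

```python
import torch
import torch.nn as nn

class NonverbalShift(nn.Module):
    def __init__(self, word_dim=300, visual_dim=47, acoustic_dim=74):
        super().__init__()
        self.gate_v = nn.Linear(word_dim + visual_dim, word_dim)
        self.gate_a = nn.Linear(word_dim + acoustic_dim, word_dim)
        self.proj_v = nn.Linear(visual_dim, word_dim)
        self.proj_a = nn.Linear(acoustic_dim, word_dim)

    def forward(self, word, visual, acoustic):
        gv = torch.sigmoid(self.gate_v(torch.cat([word, visual], dim=-1)))
        ga = torch.sigmoid(self.gate_a(torch.cat([word, acoustic], dim=-1)))
        # The nonverbal shift nudges the word embedding in directions
        # suggested by the accompanying visual and acoustic behaviours.
        shift = gv * self.proj_v(visual) + ga * self.proj_a(acoustic)
        return word + shift
```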

Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts

NUSTM/ECPE ACL 2019

Emotion cause extraction (ECE), the task of extracting the potential causes behind certain emotions in text, has gained much attention in recent years due to its wide applications.
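
A minimal sketch of the emotion-cause pair extraction idea as a two-step procedure: first tag emotion clauses and cause clauses separately, then enumerate candidate pairs and keep those a pairing filter accepts. The predicate functions here are placeholders, not the paper's learned models.

```python
def extract_pairs(clauses, is_emotion, is_cause, pair_filter):
    emotion_idx = [i for i, c in enumerate(clauses) if is_emotion(c)]
    cause_idx = [i for i, c in enumerate(clauses) if is_cause(c)]
    # Cartesian product of candidates, pruned by a learned or rule-based filter.
    return [(e, c) for e in emotion_idx for c in cause_idx
            if pair_filter(clauses[e], clauses[c])]
```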

DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition

shenwzh3/DialogXL 16 Dec 2020

Specifically, we first modify the recurrence mechanism of XLNet from segment-level to utterance-level in order to better model conversational data.
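
A minimal sketch of utterance-level recurrence: instead of caching hidden states for fixed-length segments, the memory is extended one utterance at a time so later turns can attend to the whole conversation history. The `encoder` argument is a placeholder for a transformer layer stack; this is not DialogXL's exact implementation.

```python
import torch

def encode_conversation(encoder, utterance_embeddings):
    memory = None  # cache of past utterances, shape (1, mem_len, hidden)
    outputs = []
    for utt in utterance_embeddings:                  # utt: (1, utt_len, hidden)
        context = utt if memory is None else torch.cat([memory, utt], dim=1)
        hidden = encoder(context)[:, -utt.size(1):]   # keep the current utterance
        outputs.append(hidden)
        # Append the newly encoded utterance to the memory for later turns.
        memory = hidden if memory is None else torch.cat([memory, hidden], dim=1)
    return outputs
```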

Training Deep Neural Networks on Noisy Labels with Bootstrapping

vfdev-5/BootstrappingLoss 20 Dec 2014

On MNIST handwritten digits, we show that our model is robust to label corruption.
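
A minimal sketch of the "soft" bootstrapping loss: the training target is a convex combination of the (possibly corrupted) label and the model's own prediction, so confident model beliefs can partially override noisy labels. The value beta=0.95 follows the soft variant described in the paper; everything else is a plain PyTorch rendering, not the repository's code.

```python
import torch
import torch.nn.functional as F

def soft_bootstrapping_loss(logits, noisy_labels, beta=0.95):
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(noisy_labels, num_classes=logits.size(-1)).float()
    # Blend the given label with the model's current prediction.
    target = beta * one_hot + (1.0 - beta) * probs
    return -(target * log_probs).sum(dim=-1).mean()
```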

DeXpression: Deep Convolutional Neural Network for Expression Recognition

MaxLikesMath/DeepLearningImplementations 17 Sep 2015

The proposed architecture achieves 99.6% on CKP and 98.63% on MMI, thereby outperforming the previous CNN-based state of the art.

Efficient Low-rank Multimodal Fusion with Modality-Specific Factors

Justin1904/Low-rank-Multimodal-Fusion ACL 2018

Previous research in this field has exploited the expressiveness of tensors for multimodal representation.
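
A minimal sketch of low-rank multimodal fusion: each modality gets its own low-rank factors, the per-modality projections are multiplied elementwise, and the rank dimension is summed out, avoiding materialising the full outer-product tensor. The two-modality setup and dimensions are illustrative only.

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    def __init__(self, dims=(32, 64), out_dim=16, rank=4):
        super().__init__()
        # One factor per modality, applied to the input augmented with a constant 1.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.1) for d in dims])
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, *modalities):
        fused = None
        for h, factor in zip(modalities, self.factors):
            ones = torch.ones(h.size(0), 1, device=h.device)
            z = torch.cat([h, ones], dim=-1)                  # (batch, d+1)
            proj = torch.einsum("bd,rdo->bro", z, factor)     # (batch, rank, out)
            fused = proj if fused is None else fused * proj   # elementwise product
        return fused.sum(dim=1) + self.bias                   # sum over rank
```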

Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis

robertjkeck2/EmoTe 17 Apr 2019

Therefore, in this paper we consider the task of multimodal sentiment analysis based on audio and text, and propose a novel fusion strategy that combines multi-feature fusion with multi-modality fusion to improve the accuracy of audio-text sentiment analysis.
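
A minimal sketch of a two-level fusion strategy: first fuse several feature sets within each modality (multi-feature fusion), then fuse the resulting audio and text representations (multi-modality fusion). The layer sizes and the concatenation-based fusion are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TwoLevelFusion(nn.Module):
    def __init__(self, audio_feat_dims=(40, 13), text_feat_dims=(300, 768),
                 hidden=64, num_classes=3):
        super().__init__()
        self.audio_fuse = nn.Linear(sum(audio_feat_dims), hidden)
        self.text_fuse = nn.Linear(sum(text_feat_dims), hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, audio_feats, text_feats):
        # Level 1: fuse the feature sets belonging to each modality.
        a = torch.relu(self.audio_fuse(torch.cat(audio_feats, dim=-1)))
        t = torch.relu(self.text_fuse(torch.cat(text_feats, dim=-1)))
        # Level 2: fuse the two modality representations.
        return self.classifier(torch.cat([a, t], dim=-1))
```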