Emotion Recognition
465 papers with code • 7 benchmarks • 45 datasets
Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG).
Source: Using Deep Autoencoders for Facial Expression Recognition
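As a minimal illustration of the text modality, the sketch below runs an off-the-shelf emotion classifier through the Hugging Face transformers pipeline. The checkpoint name is an assumption (one publicly available emotion model), not a model tied to any paper listed here.

```python
from transformers import pipeline

# Load a text-emotion classifier; the model name is an assumed public
# checkpoint, chosen for illustration only.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

print(classifier("I can't believe we finally shipped the release!"))
# Illustrative output, e.g. [{'label': 'joy', 'score': 0.98}]
```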
Libraries
Use these libraries to find Emotion Recognition models and implementations.
Most implemented papers
EmoTxt: A Toolkit for Emotion Recognition from Text
We provide empirical evidence of the performance of EmoTxt.
Multi-attention Recurrent Network for Human Communication Comprehension
AI must understand each modality and the interactions between them that shape human communication.
Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning
Emotion recognition has become an important field of research in Human-Computer Interaction as we improve upon the techniques for modelling the various aspects of behaviour.
Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features
The integration of information across multiple modalities and across time is a promising way to enhance the emotion recognition performance of affective systems.
Classifying and Visualizing Emotions with Emotional DAN
Classification of human emotions remains an important and challenging task for many computer vision algorithms, especially in the era of humanoid robots which coexist with humans in their everyday life.
Where is Your Evidence: Improving Fact-checking by Justification Modeling
Fact-checking is a journalistic practice that compares a claim made publicly against trusted sources of facts.
Hide-and-Seek: A Data Augmentation Technique for Weakly-Supervised Localization and Beyond
Our approach only needs to modify the input image and can work with any network to improve its performance.
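A minimal sketch of the patch-hiding idea, assuming a NumPy image array; the grid size, hiding probability, and fill value are illustrative defaults rather than the paper's exact settings:

```python
import numpy as np

def hide_and_seek(image, grid_size=4, hide_prob=0.5, fill_value=0.0):
    """Randomly hide square patches of an image, Hide-and-Seek style.

    grid_size, hide_prob, and fill_value are assumed defaults; the paper
    fills hidden patches with the dataset mean rather than a constant.
    """
    h, w = image.shape[:2]
    ph, pw = h // grid_size, w // grid_size
    out = image.copy()
    for i in range(grid_size):
        for j in range(grid_size):
            # Each grid cell is hidden independently with probability hide_prob.
            if np.random.rand() < hide_prob:
                out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = fill_value
    return out
```

Because only the input image changes, the augmentation can be dropped into any existing training pipeline without touching the network itself.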
A Compact Embedding for Facial Expression Similarity
Most of the existing work on automatic facial expression analysis focuses on discrete emotion recognition, or facial action unit detection.
Speech Emotion Recognition Using Multi-hop Attention Mechanism
As opposed to using knowledge from both modalities separately, we propose a framework to exploit acoustic information in tandem with lexical data.
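The sketch below shows one attention hop from an utterance-level acoustic vector onto per-token lexical features; the dimensions, class count, and single-hop setup are simplifying assumptions, not the paper's exact multi-hop architecture.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """One attention hop from audio onto text: a simplified sketch of
    acoustic-lexical fusion, with assumed dimensions and class count."""

    def __init__(self, dim=128, num_classes=4):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.classify = nn.Linear(2 * dim, num_classes)

    def forward(self, audio_vec, text_states):
        # audio_vec:   (B, dim)    utterance-level acoustic summary
        # text_states: (B, T, dim) per-token lexical features
        weights = torch.softmax(
            self.score(torch.tanh(text_states + audio_vec.unsqueeze(1))),
            dim=1,
        )                                              # (B, T, 1)
        text_vec = (weights * text_states).sum(dim=1)  # audio-conditioned text summary
        return self.classify(torch.cat([audio_vec, text_vec], dim=-1))
```

A multi-hop variant would feed the fused representation back as the next query, letting each modality repeatedly re-attend to the other.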
Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks
Recent studies demonstrated that the average heart rate (HR) can be measured from facial videos based on non-contact remote photoplethysmography (rPPG).
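For orientation, here is a classical frequency-domain rPPG baseline, sketched for illustration only (it is not the spatio-temporal network the paper proposes): average the green channel over a face region in each frame, then take the dominant frequency in the plausible heart-rate band.

```python
import numpy as np

def estimate_hr(green_means, fps=30.0, low=0.7, high=4.0):
    """Estimate heart rate from per-frame mean green values of a face ROI.

    green_means: 1-D array, one mean green-channel intensity per frame.
    The 0.7-4.0 Hz band corresponds to roughly 42-240 beats per minute.
    """
    signal = green_means - green_means.mean()          # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency bins in Hz
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low) & (freqs <= high)            # restrict to plausible HR
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0                              # beats per minute
```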