Emotion Recognition

456 papers with code • 7 benchmarks • 45 datasets

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Resolve Domain Conflicts for Generalizable Remote Physiological Measurement

faceonlive/ai-research 11 Apr 2024

Remote photoplethysmography (rPPG) technology has become increasingly popular due to its non-invasive monitoring of various physiological indicators, making it widely applicable in multimedia interaction, healthcare, and emotion analysis.
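The paper's domain-generalisation method is not reproduced here, but the core rPPG idea is standard: skin colour varies subtly with blood volume, so averaging a colour channel over a skin region gives a pulse trace whose dominant frequency is the heart rate. A minimal NumPy sketch (function name and constants are illustrative, not from the paper):

```python
import numpy as np

def estimate_heart_rate(green_means, fps=30.0):
    """Estimate pulse rate (BPM) from a per-frame mean-green-channel trace.

    Classic rPPG baseline: remove the DC component, restrict the spectrum
    to the plausible heart-rate band (0.7-4 Hz), take the dominant frequency.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                        # remove DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # 42-240 BPM
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                      # Hz -> beats per minute

# Synthetic 10 s trace: 1.2 Hz pulse (72 BPM) plus sensor noise
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 30.0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=len(t))
print(round(estimate_heart_rate(trace)))  # 72
```

Real systems add skin-region detection and more robust signal separation (e.g. CHROM/POS projections); the FFT peak is only the simplest baseline.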


What is Learnt by the LEArnable Front-end (LEAF)? Adapting Per-Channel Energy Normalisation (PCEN) to Noisy Conditions

hanyu-meng/adapting-leaf 10 Apr 2024

There is increasing interest in the use of the LEArnable Front-end (LEAF) in a variety of speech processing systems.
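PCEN itself has a well-known closed form: a first-order IIR smoother M of the filterbank energy E provides automatic gain control via E / (ε + M)^α, followed by the root compression (x + δ)^r − δ^r; LEAF makes these constants learnable per channel. A minimal NumPy sketch with fixed textbook constants (not the adaptation studied in this paper):

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-Channel Energy Normalisation on a (frames, channels) energy matrix.

    M is a first-order IIR smoother of the energy; dividing by (eps + M)^alpha
    gives automatic gain control, and (x + delta)^r - delta^r is a root
    compression stage. In LEAF these constants are learnable per channel.
    """
    E = np.asarray(E, dtype=float)
    M = np.empty_like(E)
    M[0] = E[0]
    for t in range(1, len(E)):
        M[t] = (1.0 - s) * M[t - 1] + s * E[t]   # smoothed energy
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

# Toy mel-like energies: 5 frames x 3 channels
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(5, 3))) + 0.1
out = pcen(E)
print(out.shape)  # (5, 3)
```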


nEMO: Dataset of Emotional Speech in Polish

faceonlive/ai-research 9 Apr 2024

Speech emotion recognition has become increasingly important in recent years due to its potential applications in healthcare, customer service, and personalization of dialogue systems.


Affective-NLI: Towards Accurate and Interpretable Personality Recognition in Conversation

preke/affective-nli 3 Apr 2024

To utilize affectivity within dialog content for accurate personality recognition, we fine-tuned a pre-trained language model specifically for emotion recognition in conversations, facilitating real-time affective annotations for utterances.


MIPS at SemEval-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models

mips-colt/mer-mce 31 Mar 2024

This paper presents our winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations.


Heterogeneity over Homogeneity: Investigating Multilingual Speech Pre-Trained Models for Detecting Audio Deepfake

orchidchetiaphukan/multilingualptm_add_naacl24 31 Mar 2024

To validate our hypothesis, we extract representations from state-of-the-art (SOTA) PTMs (monolingual and multilingual models, as well as PTMs trained for speaker and emotion recognition) and evaluate them on the ASVSpoof 2019 (ASV), In-the-Wild (ITW), and DECRO benchmark databases.


Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation

yu-fangxu/eacl 29 Mar 2024

To achieve this, we utilize label encodings as anchors to guide the learning of utterance representations and design an auxiliary loss to ensure the effective separation of anchors for similar emotions.
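As a rough illustration of the anchor idea (not the paper's code; all names, shapes, and constants here are invented), one can score each utterance representation against every label anchor with a softmax contrastive loss, and add an auxiliary penalty on similarity between distinct anchors:

```python
import numpy as np

def anchored_contrastive_loss(reps, labels, anchors, tau=0.1):
    """Cross-entropy of each utterance against all label anchors.

    reps:    (N, d) L2-normalised utterance representations
    anchors: (C, d) L2-normalised label-encoding anchors
    Pulls each representation toward its own emotion's anchor and away
    from the others (illustrative re-implementation only).
    """
    logits = reps @ anchors.T / tau              # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def anchor_separation_loss(anchors):
    """Auxiliary loss: penalise cosine similarity between distinct anchors."""
    sim = anchors @ anchors.T
    return sim[~np.eye(len(anchors), dtype=bool)].mean()

rng = np.random.default_rng(0)
def norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

anchors = norm(rng.normal(size=(4, 16)))   # 4 emotions, 16-dim toy space
reps = norm(rng.normal(size=(8, 16)))      # 8 utterance representations
labels = rng.integers(0, 4, size=8)
print(anchored_contrastive_loss(reps, labels, anchors),
      anchor_separation_loss(anchors))
```

Minimising both terms jointly drives similar-emotion anchors apart while clustering utterances around their own anchor, which is the separation effect the abstract describes.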


MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models

WYT8506/MultimodalCertification 28 Mar 2024

Moreover, we compare our MMCert with a state-of-the-art certified defense extended from unimodal models.


Colour and Brush Stroke Pattern Recognition in Abstract Art using Modified Deep Convolutional Generative Adversarial Networks

Deceptrax123/Pattern-Recognition-in-Abstract-art- 27 Mar 2024

Further, this paper explores the generated latent space by performing random walks to understand vector relationships between brush strokes and colours in the abstract-art space, and presents a statistical analysis of unstable outputs after a certain period of GAN training, comparing their significant differences.
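A latent-space random walk of the kind described is simple to sketch; the generator call is omitted and all dimensions and step sizes below are illustrative, not taken from the paper:

```python
import numpy as np

def latent_random_walk(z0, steps=10, step_size=0.1, seed=0):
    """Random walk through a GAN's latent space.

    Starting from latent vector z0, take small Gaussian steps; feeding
    each point on the path to the trained generator (not shown) reveals
    how brush strokes and colours morph between neighbouring latents.
    """
    rng = np.random.default_rng(seed)
    path = [np.asarray(z0, dtype=float)]
    for _ in range(steps):
        path.append(path[-1] + step_size * rng.normal(size=path[-1].shape))
    return np.stack(path)          # (steps + 1, latent_dim)

walk = latent_random_walk(np.zeros(100), steps=10, step_size=0.1)
print(walk.shape)  # (11, 100)
```

Decoding each row of `walk` with the generator and inspecting consecutive frames is the usual way such walks expose directions in latent space that correlate with stroke texture or palette.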


Unlocking the Emotional States of High-Risk Suicide Callers through Speech Analysis

alaaNfissi/Unlocking-the-Emotional-States-of-High-Risk-Suicide-Callers-through-Speech-Analysis IEEE 18th International Conference on Semantic Computing (ICSC) 2024

In light of these challenges, we present a novel end-to-end (E2E) method for speech emotion recognition (SER) as a means of detecting changes in emotional state that may indicate a high risk of suicide.

22 Mar 2024