Emotion Recognition

464 papers with code • 7 benchmarks • 45 datasets

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Affective-NLI: Towards Accurate and Interpretable Personality Recognition in Conversation

preke/affective-nli 3 Apr 2024

To utilize affectivity within dialog content for accurate personality recognition, we fine-tuned a pre-trained language model specifically for emotion recognition in conversations, facilitating real-time affective annotations for utterances.

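A minimal sketch of the general recipe the snippet describes: fine-tuning a pre-trained language model for utterance-level emotion classification. The checkpoint, label set, and hyperparameters below are illustrative assumptions, not the Affective-NLI authors' choices.

```python
# Sketch: fine-tune a pre-trained encoder to tag utterances with emotions,
# which can then serve as real-time affective annotations for dialogs.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EMOTIONS = ["neutral", "joy", "sadness", "anger", "fear", "surprise", "disgust"]

tok = AutoTokenizer.from_pretrained("roberta-base")  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(EMOTIONS)
)
opt = AdamW(model.parameters(), lr=2e-5)

# Toy training pair: one utterance and its gold emotion label.
batch = tok(["I can't believe we actually won!"], return_tensors="pt", padding=True)
labels = torch.tensor([EMOTIONS.index("joy")])

model.train()
out = model(**batch, labels=labels)  # cross-entropy loss is computed internally
out.loss.backward()
opt.step()

# At inference time the fine-tuned model yields a per-utterance emotion tag.
model.eval()
with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1)
print(EMOTIONS[pred.item()])
```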

MIPS at SemEval-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models

mips-colt/mer-mce 31 Mar 2024

This paper presents our winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations.


Heterogeneity over Homogeneity: Investigating Multilingual Speech Pre-Trained Models for Detecting Audio Deepfake

orchidchetiaphukan/multilingualptm_add_naacl24 31 Mar 2024

To validate our hypothesis, we extract representations from state-of-the-art (SOTA) PTMs, including monolingual and multilingual models as well as PTMs trained for speaker and emotion recognition, and evaluate them on the ASVSpoof 2019 (ASV), In-the-Wild (ITW), and DECRO benchmark databases.

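A minimal sketch of this evaluation recipe: freeze a pre-trained speech model (PTM), mean-pool its hidden states into utterance embeddings, and train a light classifier for bonafide-vs-spoof. The checkpoint and classification head are illustrative stand-ins for the several PTMs the paper compares.

```python
# Sketch: frozen PTM as a feature extractor for audio deepfake detection.
import torch
from transformers import AutoFeatureExtractor, AutoModel

extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
ptm = AutoModel.from_pretrained("facebook/wav2vec2-base").eval()

def embed(waveform_16khz: torch.Tensor) -> torch.Tensor:
    """Map a mono 16 kHz waveform to a fixed-size PTM representation."""
    inputs = extractor(waveform_16khz.numpy(), sampling_rate=16000,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = ptm(**inputs).last_hidden_state  # (1, frames, dim)
    return hidden.mean(dim=1)                     # (1, dim) pooled embedding

head = torch.nn.Linear(ptm.config.hidden_size, 2)  # bonafide vs. spoof
logits = head(embed(torch.randn(16000)))           # 1 second of dummy audio
print(logits.shape)  # torch.Size([1, 2])
```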

Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation

yu-fangxu/eacl 29 Mar 2024

To achieve this, we utilize label encodings as anchors to guide the learning of utterance representations and design an auxiliary loss to ensure the effective separation of anchors for similar emotions.

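A minimal sketch of the two ideas in the snippet: pull each utterance representation toward a learnable anchor for its emotion label, and add an auxiliary loss pushing anchors of different emotions apart. The dimensions, margin, and loss weight are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: emotion-anchored contrastive objective with anchor separation.
import torch
import torch.nn.functional as F

num_emotions, dim = 7, 256
anchors = torch.nn.Parameter(torch.randn(num_emotions, dim))

def anchored_loss(utt_reprs, labels, margin=0.5, aux_weight=0.1):
    a = F.normalize(anchors, dim=-1)
    z = F.normalize(utt_reprs, dim=-1)
    # (1) anchor-guided term: classify each utterance against all anchors,
    # pulling it toward the anchor of its gold emotion.
    logits = z @ a.t()                       # (batch, num_emotions)
    anchor_loss = F.cross_entropy(logits, labels)
    # (2) auxiliary separation term: penalize anchor pairs whose cosine
    # similarity exceeds the margin, keeping similar emotions distinguishable.
    sim = a @ a.t()
    off_diag = sim - torch.eye(num_emotions)  # zero out the diagonal (self-sim = 1)
    separation = F.relu(off_diag - margin).sum() / (num_emotions * (num_emotions - 1))
    return anchor_loss + aux_weight * separation

loss = anchored_loss(torch.randn(8, dim), torch.randint(0, num_emotions, (8,)))
loss.backward()
print(float(loss))
```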

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models

WYT8506/MultimodalCertification 28 Mar 2024

Moreover, we compare our MMCert with a state-of-the-art certified defense extended from unimodal models.


Colour and Brush Stroke Pattern Recognition in Abstract Art using Modified Deep Convolutional Generative Adversarial Networks

Deceptrax123/Pattern-Recognition-in-Abstract-art- 27 Mar 2024

Further, this paper explores the generated latent space by performing random walks to understand vector relationships between brush strokes and colours in the abstract art space, and presents a statistical analysis of unstable outputs after a certain period of GAN training to assess their significant differences.

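A minimal sketch of the latent-space exploration the snippet describes: perturb a latent vector step by step and decode each point with the generator, so that nearby latents reveal how strokes and colours vary. The generator here is a stand-in module, not the paper's modified DCGAN.

```python
# Sketch: random walk through a GAN's latent space.
import torch

latent_dim = 100

# Stand-in generator: latent vector -> 3x64x64 image (DCGAN-shaped output).
generator = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 3 * 64 * 64),
    torch.nn.Tanh(),
)

def random_walk(steps=10, step_size=0.2):
    z = torch.randn(1, latent_dim)
    frames = []
    for _ in range(steps):
        z = z + step_size * torch.randn_like(z)  # small Gaussian step
        with torch.no_grad():
            frames.append(generator(z).view(3, 64, 64))
    return frames  # nearby latents should decode to related strokes/colours

frames = random_walk()
print(len(frames), frames[0].shape)
```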

Unlocking the Emotional States of High-Risk Suicide Callers through Speech Analysis

alaaNfissi/Unlocking-the-Emotional-States-of-High-Risk-Suicide-Callers-through-Speech-Analysis IEEE 18th International Conference on Semantic Computing (ICSC) 2024

In light of these challenges, we present a novel end-to-end (E2E) method for speech emotion recognition (SER) as a means of detecting changes in emotional state that may indicate a high risk of suicide.

22 Mar 2024

Recursive Joint Cross-Modal Attention for Multimodal Fusion in Dimensional Emotion Recognition

praveena2j/rjcma 20 Mar 2024

In particular, we compute the attention weights based on cross-correlation between the joint audio-visual-text feature representations and the feature representations of individual modalities to simultaneously capture intra- and intermodal relationships across the modalities.

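A minimal sketch of the mechanism the snippet describes: derive attention weights from the cross-correlation between a joint audio-visual-text representation and each individual modality's features. The projections and dimensions are illustrative, and the paper additionally applies this attention recursively.

```python
# Sketch: cross-correlation attention between joint and per-modality features.
import torch
import torch.nn.functional as F

T, d = 20, 64  # time steps, per-modality feature dim
audio, video, text = (torch.randn(T, d) for _ in range(3))

proj = torch.nn.Linear(3 * d, d)
joint = proj(torch.cat([audio, video, text], dim=-1))  # (T, d) joint features

def cross_modal_attend(joint_feats, modality_feats):
    # Cross-correlation between joint and modality features, softmax-
    # normalized into attention weights over time.
    corr = joint_feats @ modality_feats.t() / d ** 0.5  # (T, T)
    attn = F.softmax(corr, dim=-1)
    return attn @ modality_feats  # modality features re-weighted by the joint view

attended = [cross_modal_attend(joint, m) for m in (audio, video, text)]
fused = torch.cat(attended, dim=-1)  # captures intra- and inter-modal relations
print(fused.shape)  # torch.Size([20, 192])
```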

Iterative Feature Boosting for Explainable Speech Emotion Recognition

alaaNfissi/Iterative-Feature-Boosting-for-Explainable-Speech-Emotion-Recognition International Conference on Machine Learning and Applications (ICMLA) 2024

In speech emotion recognition (SER), using pre-defined features without considering their practical importance may lead to high-dimensional datasets containing redundant and irrelevant information.

19 Mar 2024
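A minimal sketch of the motivation in this snippet: iteratively prune low-importance features from a high-dimensional SER feature set while tracking validation accuracy. The toy data, model, and pruning schedule are illustrative assumptions, not the paper's exact boosting procedure.

```python
# Sketch: iterative importance-based feature pruning on a toy feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))           # 40 acoustic features (toy data)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only 2 features are informative

kept = np.arange(X.shape[1])
for _ in range(5):  # a few pruning rounds
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:, kept], y)
    score = cross_val_score(clf, X[:, kept], y, cv=3).mean()
    order = np.argsort(clf.feature_importances_)
    kept = kept[order[len(order) // 4 :]]  # drop the least-important quartile
    print(f"{len(kept)} features kept, CV accuracy {score:.2f}")
```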

Joint Multimodal Transformer for Emotion Recognition in the Wild

PoloWlg/Joint-Multimodal-Transformer-6th-ABAW 15 Mar 2024

Multimodal emotion recognition (MMER) systems typically outperform unimodal systems by leveraging the inter- and intra-modal relationships between, e.g., visual, textual, physiological, and auditory modalities.

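A minimal sketch of joint transformer fusion in the spirit of this snippet: concatenate per-modality token sequences, add a modality-type embedding, and let self-attention model intra- and inter-modal relationships in one pass. All sizes and the pooled classification head are illustrative.

```python
# Sketch: joint self-attention over concatenated modality token sequences.
import torch

d, T = 64, 10
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2,
)
modality_emb = torch.nn.Embedding(3, d)  # visual / audio / text type tags

visual, audio, text = (torch.randn(1, T, d) for _ in range(3))
tokens = torch.cat([visual, audio, text], dim=1)           # (1, 3T, d)
types = torch.arange(3).repeat_interleave(T).unsqueeze(0)  # modality ids
fused = encoder(tokens + modality_emb(types))              # joint attention

head = torch.nn.Linear(d, 7)               # 7 emotion classes (illustrative)
emotion_logits = head(fused.mean(dim=1))   # mean-pool, then classify
print(emotion_logits.shape)  # torch.Size([1, 7])
```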