Video Emotion Recognition
6 papers with code • 2 benchmarks • 5 datasets
Latest papers with no code
Affective Video Content Analysis: Decade Review and New Perspectives
In this study, we comprehensively review the development of AVCA over the past decade, particularly focusing on the most advanced methods adopted to address the three major challenges of video feature extraction, expression subjectivity, and multimodal feature fusion.
Fuzzy Approach for Audio-Video Emotion Recognition in Computer Games for Children
In this paper, we propose a novel framework that integrates a fuzzy approach for the recognition of emotions through the analysis of audio and video data.
Versatile Audio-Visual Learning for Handling Single and Multi Modalities in Emotion Regression and Classification Tasks
This study proposes a versatile audio-visual learning (VAVL) framework for handling unimodal and multimodal systems for emotion regression and emotion classification tasks.
Representation Learning through Multimodal Attention and Time-Sync Comments for Affective Video Content Analysis
These self-supervised pre-training tasks prompt the fusion module to learn representations of segments containing time-sync comments (TSC), thus capturing more temporal affective patterns.
ICANet: A Method of Short Video Emotion Recognition Driven by Multimodal Data
With the rapid development of artificial intelligence and short-form video, emotion recognition in short videos has become one of the most important research topics in human-computer interaction.
Multi-modal Residual Perceptron Network for Audio-Video Emotion Recognition
Audio-video emotion recognition is now commonly addressed with deep neural network modeling tools.
Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition
Audio-video-based emotion recognition aims to classify a given video into one of the basic emotions.
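One common fusion strategy explored in this line of work is decision-level (late) fusion, where each modality produces its own emotion scores and the scores are combined. A minimal sketch, assuming hypothetical per-modality probability vectors over four basic emotions (the values and class set are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical per-modality emotion probabilities for one clip over
# four basic emotions, e.g. (happy, sad, angry, neutral).
audio_scores = np.array([0.6, 0.1, 0.2, 0.1])
video_scores = np.array([0.4, 0.2, 0.3, 0.1])

# Decision-level fusion: average the modality-wise probability vectors.
fused = (audio_scores + video_scores) / 2.0
prediction = int(np.argmax(fused))  # index of the predicted emotion class
```

Weighted averaging (e.g. trusting audio more for some emotions) is a common refinement of this simple scheme.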
Audio-video Emotion Recognition in the Wild using Deep Hybrid Networks
This paper presents an audiovisual-based emotion recognition hybrid network.
An End-to-End Visual-Audio Attention Network for Emotion Recognition in User-Generated Videos
Emotion recognition in user-generated videos plays an important role in human-centered computing.
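The core idea of a visual-audio attention network can be illustrated with dot-product attention, where an audio embedding acts as the query that weights frame-level visual features before pooling. A minimal numpy sketch, with illustrative random features and dimensions (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frame-level visual features (T frames x D dims) and one
# clip-level audio embedding (D dims); all values are stand-ins.
T, D = 8, 16
visual = rng.standard_normal((T, D))
audio = rng.standard_normal(D)

# Dot-product attention: the audio embedding queries the frames,
# so frames relevant to the audio contribute more to the clip vector.
scores = visual @ audio / np.sqrt(D)
weights = np.exp(scores - scores.max())
weights /= weights.sum()            # softmax over frames
clip_repr = weights @ visual        # attention-pooled visual feature
```

The pooled `clip_repr` would then feed an emotion classifier head.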
Multimodal Fusion with Deep Neural Networks for Audio-Video Emotion Recognition
This paper presents a novel deep neural network (DNN) for multimodal fusion of audio, video and text modalities for emotion recognition.
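Feature-level fusion of audio, video, and text in a DNN typically means concatenating per-modality embeddings and passing the joint vector through a shared classification head. A minimal sketch with random weights and arbitrary dimensions, purely to show the data flow (not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative per-modality embeddings; dimensions are arbitrary.
audio_feat = rng.standard_normal(32)
video_feat = rng.standard_normal(64)
text_feat = rng.standard_normal(16)

# Feature-level fusion: concatenate, then map the joint vector
# through a small MLP head to scores over 4 emotion classes.
fused = np.concatenate([audio_feat, video_feat, text_feat])  # shape (112,)

W1 = rng.standard_normal((112, 32)) * 0.1
W2 = rng.standard_normal((32, 4)) * 0.1
hidden = np.maximum(fused @ W1, 0.0)   # ReLU
logits = hidden @ W2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax over emotion classes
```

In practice the weights are trained end to end, and the per-modality encoders (CNNs, transformers, etc.) produce the embeddings concatenated here.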