Facial Expression Recognition (FER)
127 papers with code • 24 benchmarks • 29 datasets
Facial Expression Recognition (FER) is a computer vision task that identifies and categorizes the emotional expressions on a human face. The goal is to automate the determination of emotions in real time by analyzing facial features such as the eyebrows, eyes, and mouth, and mapping them to a set of emotions such as anger, fear, surprise, sadness, and happiness.
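The feature-to-emotion mapping described above can be sketched as a softmax classifier over a facial feature vector. Everything below is illustrative: the feature values and weights are random stand-ins, not a trained model, and real systems derive features from a face detector, landmark extractor, or CNN.

```python
import numpy as np

EMOTIONS = ["anger", "fear", "surprise", "sadness", "happiness"]

rng = np.random.default_rng(0)

# Hypothetical feature vector describing eyebrow, eye, and mouth geometry
# (in practice produced by a landmark extractor or a CNN backbone).
features = rng.normal(size=8)

# Randomly initialized linear layer standing in for a trained classifier head.
W = rng.normal(size=(len(EMOTIONS), 8))
b = np.zeros(len(EMOTIONS))

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Map features to a probability distribution over the emotion classes.
probs = softmax(W @ features + b)
label = EMOTIONS[int(np.argmax(probs))]
print(label, probs.round(3))
```

Real FER pipelines replace the random linear layer with a deep network trained on labeled face datasets, but the final step, a softmax over emotion classes, is the same.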
(Image credit: DeXpression)
Libraries
Use these libraries to find Facial Expression Recognition (FER) models and implementations.
Subtasks
Latest papers
EmoCLIP: A Vision-Language Method for Zero-Shot Video Facial Expression Recognition
To test this, we evaluate using zero-shot classification of the model trained on sample-level descriptions on four popular dynamic FER datasets.
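Zero-shot classification in the CLIP style, which EmoCLIP builds on, embeds the video (or frame) and a set of textual class descriptions into a shared space and picks the class whose text embedding has the highest cosine similarity. A schematic sketch with stand-in embeddings; the vectors and prompts below are assumptions for illustration, not EmoCLIP's actual encoders:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 16  # toy embedding dimension

# Stand-in text embeddings; a real system would run class descriptions
# through a text encoder.
class_prompts = [
    "an expression of happiness",
    "an expression of sadness",
    "an expression of anger",
]
text_emb = rng.normal(size=(len(class_prompts), DIM))

# Stand-in video embedding, constructed near the "sadness" prompt.
video_emb = text_emb[1] + 0.1 * rng.normal(size=DIM)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Zero-shot prediction: the class with the highest cosine similarity wins.
sims = normalize(text_emb) @ normalize(video_emb)
pred = class_prompts[int(np.argmax(sims))]
print(pred)
```

No emotion-specific training is needed at inference time; swapping in a new set of class descriptions changes the label space, which is what makes the zero-shot evaluation across multiple dynamic FER datasets possible.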
EmoNeXt: an Adapted ConvNeXt for Facial Emotion Recognition
Facial expressions play a crucial role in human communication, serving as a powerful and impactful means of expressing a wide range of emotions.
A Dual-Direction Attention Mixed Feature Network for Facial Expression Recognition
In recent years, facial expression recognition (FER) has garnered significant attention within the realm of computer vision research.
Latent-OFER: Detect, Mask, and Reconstruct with Latent Vectors for Occluded Facial Expression Recognition
This approach involves three steps. First, a vision transformer (ViT)-based occlusion patch detector masks the occluded positions by training only on latent vectors from the unoccluded patches, using the support vector data description algorithm.
Active Learning with Contrastive Pre-training for Facial Expression Recognition
Even though some prior works have focused on reducing the need for large amounts of labelled data using different unsupervised methods, another promising approach called active learning is barely explored in the context of FER.
A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations
With the extracted face sequences, we propose a multimodal facial expression-aware emotion recognition model, which leverages the frame-level facial emotion distributions to help improve utterance-level emotion recognition based on multi-task learning.
PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition
In this paper, a lightweight patch and attention network based on MobileNetV1, referred to as PAtt-Lite, is proposed to improve FER performance under challenging conditions.
ReSup: Reliable Label Noise Suppression for Facial Expression Recognition
To further enhance the reliability of our noise decision results, ReSup uses two networks to jointly achieve noise suppression.
A Dual-Branch Adaptive Distribution Fusion Framework for Real-World Facial Expression Recognition
One auxiliary branch is constructed to obtain the label distributions of samples.
ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning
We also employ learnable anchor points in the embedding space, together with label distributions and a multi-head self-attention mechanism, to optimize performance against weak predictions through reliability balancing: a strategy that leverages anchor points, attention scores, and confidence values to enhance the resilience of label predictions.