Facial Expression Recognition (FER)

127 papers with code • 24 benchmarks • 29 datasets

Facial Expression Recognition (FER) is a computer vision task that identifies and categorizes the emotional expressions on a human face. The goal is to automate emotion recognition in real time by analyzing facial features such as the eyebrows, eyes, and mouth, and mapping them to a set of emotions such as anger, fear, surprise, sadness, and happiness.
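
As a minimal illustration of this feature-to-emotion mapping, the sketch below classifies a toy feature vector (hypothetical eyebrow-raise, eye-openness, and mouth-curvature scores, not features from any real FER system) by nearest emotion prototype:

```python
import math

# Toy sketch of the FER pipeline described above: each face is reduced to a
# small feature vector (hypothetical measurements: eyebrow raise, eye
# openness, mouth curvature), then mapped to the nearest emotion prototype.
PROTOTYPES = {
    "happiness": (0.2, 0.6, 0.9),   # mouth strongly curved up
    "sadness":   (0.1, 0.3, 0.1),
    "surprise":  (0.9, 0.9, 0.5),   # raised brows, wide eyes
    "anger":     (0.0, 0.5, 0.2),
}

def classify(features):
    """Return the emotion whose prototype is closest in Euclidean distance."""
    return min(PROTOTYPES, key=lambda e: math.dist(features, PROTOTYPES[e]))

print(classify((0.85, 0.95, 0.45)))  # close to the "surprise" prototype
```

Real systems replace the hand-picked prototypes with a learned classifier over deep features, but the mapping step has this shape.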

(Image credit: DeXpression)


EmoCLIP: A Vision-Language Method for Zero-Shot Video Facial Expression Recognition

nickyfot/emoclip 25 Oct 2023

To test this, we evaluate the model trained on sample-level descriptions using zero-shot classification on four popular dynamic FER datasets.
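
The zero-shot idea can be sketched with toy vectors: embed each class description once, embed the sample, and pick the class with the highest cosine similarity. The embeddings below are made-up stand-ins for CLIP-style encoder outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical class-description embeddings (stand-ins for a text encoder).
class_embeddings = {
    "happiness": (0.9, 0.1, 0.0),
    "anger":     (0.0, 0.9, 0.2),
    "surprise":  (0.1, 0.2, 0.9),
}

def zero_shot_classify(video_embedding):
    """Assign the class whose description embedding is most similar."""
    return max(class_embeddings,
               key=lambda c: cosine(video_embedding, class_embeddings[c]))

print(zero_shot_classify((0.8, 0.2, 0.1)))  # most similar to "happiness"
```

No FER-specific training is needed at test time; only the class descriptions change per dataset.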

★ 30 · 25 Oct 2023

EmoNeXt: an Adapted ConvNeXt for Facial Emotion Recognition

yelboudouri/EmoNeXt IEEE 25th International Workshop on Multimedia Signal Processing (MMSP) 2023

Facial expressions play a crucial role in human communication, serving as a powerful and impactful means of expressing a wide range of emotions.

★ 23 · 27 Sep 2023

A Dual-Direction Attention Mixed Feature Network for Facial Expression Recognition

simon20010923/DDAMFN journal 2023

In recent years, facial expression recognition (FER) has garnered significant attention within the realm of computer vision research.

★ 59 · 25 Aug 2023

Latent-OFER: Detect, Mask, and Reconstruct with Latent Vectors for Occluded Facial Expression Recognition

leeisack/latent-ofer ICCV 2023

This approach involves three steps: First, the vision transformer (ViT)-based occlusion patch detector masks the occluded position by training only latent vectors from the unoccluded patches using the support vector data description algorithm.
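
The SVDD step can be loosely sketched as a hypersphere fitted to the latents of unoccluded patches; here a mean-center plus max-radius stand-in (a simplification of SVDD, with made-up 2-D latents) flags any latent that falls outside the sphere as occluded:

```python
import math

# Hypothetical latent vectors from unoccluded patches (2-D for illustration).
clean_latents = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18), (0.20, 0.22)]

# Fit a sphere around the clean latents: mean center, max distance as radius.
center = tuple(sum(c) / len(clean_latents) for c in zip(*clean_latents))
radius = max(math.dist(z, center) for z in clean_latents)

def is_occluded(latent, margin=1.05):
    """Flag a patch whose latent lies outside the (slightly inflated) sphere."""
    return math.dist(latent, center) > radius * margin

print(is_occluded((0.90, 0.80)))  # far from the clean cluster -> True
print(is_occluded((0.14, 0.21)))  # inside the sphere -> False
```

Real SVDD optimizes the sphere with slack variables and kernels; this only conveys the decision rule.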

★ 11 · 21 Jul 2023

Active Learning with Contrastive Pre-training for Facial Expression Recognition

shuvenduroy/activefer 6 Jul 2023

Even though some prior works have focused on reducing the need for large amounts of labelled data using different unsupervised methods, another promising approach called active learning is barely explored in the context of FER.
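
A minimal sketch of pool-based active learning with uncertainty sampling (one common acquisition strategy, not necessarily the paper's exact procedure): query the unlabelled sample the current model is least confident about and send it for annotation:

```python
def least_confident(pool, predict_proba):
    """Return the pool index whose top predicted probability is lowest."""
    return min(range(len(pool)), key=lambda i: max(predict_proba(pool[i])))

# Hypothetical binary model: confidence drops near the decision boundary x=0.5.
def predict_proba(x):
    p = min(max(x, 0.0), 1.0)
    return (p, 1.0 - p)

pool = [0.05, 0.48, 0.95, 0.7]
print(least_confident(pool, predict_proba))  # index 1: x=0.48 is most uncertain
```

In a full loop, the queried sample is labelled, moved to the training set, and the model retrained before the next query.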

★ 4 · 06 Jul 2023

A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations

NUSTM/FacialMMT Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2023

With the extracted face sequences, we propose a multimodal facial expression-aware emotion recognition model, which leverages the frame-level facial emotion distributions to help improve utterance-level emotion recognition based on multi-task learning.
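
A simplified sketch of the frame-to-utterance idea, assuming plain averaging of per-frame emotion distributions rather than the paper's learned multi-task fusion:

```python
# Hypothetical label set and per-frame distributions over it.
EMOTIONS = ["anger", "happiness", "sadness"]

def utterance_emotion(frame_distributions):
    """Average frame-level distributions, then take the argmax emotion."""
    n = len(frame_distributions)
    avg = [sum(d[i] for d in frame_distributions) / n
           for i in range(len(EMOTIONS))]
    return EMOTIONS[max(range(len(avg)), key=avg.__getitem__)]

frames = [(0.1, 0.7, 0.2), (0.2, 0.5, 0.3), (0.1, 0.8, 0.1)]
print(utterance_emotion(frames))  # "happiness"
```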

★ 46 · 01 Jul 2023

PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition

jlrex/patt-lite 16 Jun 2023

In this paper, a lightweight patch and attention network based on MobileNetV1, referred to as PAtt-Lite, is proposed to improve FER performance under challenging conditions.

★ 26 · 16 Jun 2023

ReSup: Reliable Label Noise Suppression for Facial Expression Recognition

purpleleaves007/ferdenoise 29 May 2023

To further enhance the reliability of our noise decision results, ReSup uses two networks to jointly achieve noise suppression.
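
The two-network idea can be sketched as agreement-based filtering (a co-decision-style simplification with made-up stand-in classifiers, not ReSup's actual networks): a label is kept as clean only when both networks predict it:

```python
def suppress_noise(samples, labels, net_a, net_b):
    """Split samples into clean/noisy by joint agreement with the label."""
    clean, noisy = [], []
    for x, y in zip(samples, labels):
        if net_a(x) == y and net_b(x) == y:
            clean.append((x, y))
        else:
            noisy.append((x, y))
    return clean, noisy

# Hypothetical stand-in classifiers with slightly different thresholds.
net_a = lambda x: "happy" if x > 0.5 else "sad"
net_b = lambda x: "happy" if x > 0.4 else "sad"

clean, noisy = suppress_noise([0.9, 0.45, 0.1],
                              ["happy", "happy", "happy"], net_a, net_b)
print(len(clean), len(noisy))  # 1 2
```

Using two differently-initialized networks makes the noise decision less dependent on any single model's bias.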

★ 0 · 29 May 2023

A Dual-Branch Adaptive Distribution Fusion Framework for Real-World Facial Expression Recognition

taylor-xy0827/Ada-DF ICASSP 2023

One auxiliary branch is constructed to obtain the label distributions of samples.

★ 2 · 05 May 2023

ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning

takihasan/arbex 2 May 2023

We also employ learnable anchor points in the embedding space, together with label distributions and a multi-head self-attention mechanism, to optimize performance against weak predictions through reliability balancing, a strategy that leverages anchor points, attention scores, and confidence values to enhance the resilience of label predictions.

★ 4 · 02 May 2023