Cross-Domain Facial Expression Recognition

4 papers with code • 2 benchmarks • 0 datasets

Cross-domain Facial Expression Recognition (CD-FER) aims to transfer the ability to recognize facial expressions from the source domain to the target domain when only unlabelled training images of the target domain are available (i.e., the annotations of the target domain are missing).
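As a rough illustration of this setting, the sketch below shows a DANN-style adversarial adaptation step: a supervised expression loss on labelled source images plus a domain-confusion loss over both domains via a gradient reversal layer. This is a minimal PyTorch sketch, not the method of any paper listed here; the module names (`CDFERModel`, `training_step`) and the placeholder backbone are assumptions for illustration.

```python
# Minimal sketch of the unsupervised CD-FER setting: labelled source images,
# unlabelled target images, and a domain-adversarial loss that pushes the
# feature extractor toward domain-invariant representations.
# All names and dimensions are illustrative, not taken from any specific paper.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity on the forward pass; reverses (and scales) gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class CDFERModel(nn.Module):
    def __init__(self, feat_dim=512, num_classes=7):
        super().__init__()
        # Placeholder feature extractor; a real system would use a CNN backbone.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.expr_head = nn.Linear(feat_dim, num_classes)  # expression classifier
        self.domain_head = nn.Linear(feat_dim, 2)          # source vs. target discriminator

    def forward(self, x, lambd=1.0):
        feat = self.backbone(x)
        expr_logits = self.expr_head(feat)
        dom_logits = self.domain_head(GradReverse.apply(feat, lambd))
        return expr_logits, dom_logits


def training_step(model, src_x, src_y, tgt_x, lambd=0.1):
    """One adaptation step: supervised loss on source, adversarial domain loss on both domains."""
    ce = nn.CrossEntropyLoss()
    src_expr, src_dom = model(src_x, lambd)
    _, tgt_dom = model(tgt_x, lambd)  # target expression labels are unavailable
    cls_loss = ce(src_expr, src_y)
    dom_labels = torch.cat([torch.zeros(len(src_x)), torch.ones(len(tgt_x))]).long()
    dom_loss = ce(torch.cat([src_dom, tgt_dom]), dom_labels)
    return cls_loss + dom_loss
```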

Most implemented papers

Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning

HCPLab-SYSU/CD-FER-Benchmark 3 Aug 2020

Although each method claims superior performance, fair comparisons are lacking due to inconsistent choices of source/target datasets and feature extractors.

Adversarial Graph Representation Adaptation for Cross-Domain Facial Expression Recognition

HCPLab-SYSU/CD-FER-Benchmark 3 Aug 2020

However, most of these works focus on holistic feature adaptation, and they ignore local features that are more transferable across different datasets.

Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition

altndrr/clup 11 Oct 2022

Automatically understanding emotions from visual data is a fundamental task for human behaviour understanding.

Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition

yao-papercodes/aglrls 20 Jan 2024

Specifically, the framework consists of separate global-local adversarial learning modules that learn domain-invariant global and local features independently.
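To make the "separate global-local adversarial modules" idea concrete, the sketch below gives one possible reading: the global feature and each local (facial-region) feature have their own domain discriminator, so domain invariance is enforced independently per branch before the features are fused for classification. This is an assumption-based illustration; the region count, dimensions, and class names are hypothetical and not taken from the paper.

```python
# Rough sketch: independent domain discriminators for the global branch and
# each local (region) branch; expression classification uses the fused features.
# In training, each branch's domain logits would pass through a gradient-reversal
# layer, as in the earlier sketch. All names/dimensions are illustrative.
import torch
import torch.nn as nn


class GlobalLocalAdversarial(nn.Module):
    def __init__(self, feat_dim=256, num_regions=5, num_classes=7):
        super().__init__()
        self.global_enc = nn.LazyLinear(feat_dim)
        # One encoder and one domain discriminator per facial region (e.g. eyes, mouth).
        self.local_enc = nn.ModuleList(nn.LazyLinear(feat_dim) for _ in range(num_regions))
        self.global_disc = nn.Linear(feat_dim, 2)
        self.local_disc = nn.ModuleList(nn.Linear(feat_dim, 2) for _ in range(num_regions))
        self.classifier = nn.Linear(feat_dim * (1 + num_regions), num_classes)

    def forward(self, global_x, local_xs):
        g = torch.relu(self.global_enc(global_x))
        locals_ = [torch.relu(enc(x)) for enc, x in zip(self.local_enc, local_xs)]
        # Each branch produces its own domain logits, learned independently.
        dom_logits = [self.global_disc(g)] + [d(l) for d, l in zip(self.local_disc, locals_)]
        expr_logits = self.classifier(torch.cat([g] + locals_, dim=1))
        return expr_logits, dom_logits
```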