From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos

9 Dec 2023 · Yin Chen, Jia Li, Shiguang Shan, Meng Wang, Richang Hong

Dynamic facial expression recognition (DFER) in the wild is still hindered by data limitations, e.g., insufficient quantity and diversity of pose, occlusion, and illumination, as well as the inherent ambiguity of facial expressions. In contrast, static facial expression recognition (SFER) currently achieves much higher performance and can benefit from more abundant, high-quality training data. Moreover, the appearance features and dynamic dependencies in DFER remain largely unexplored. To tackle these challenges, we introduce a novel Static-to-Dynamic model (S2D) that leverages existing SFER knowledge and the dynamic information implicitly encoded in extracted facial landmark-aware features, thereby significantly improving DFER performance. First, we build and train an image model for SFER that incorporates only a standard Vision Transformer (ViT) and Multi-View Complementary Prompters (MCPs). Then, we obtain our video model for DFER (i.e., S2D) by inserting Temporal-Modeling Adapters (TMAs) into the image model. The MCPs enhance facial expression features with landmark-aware features inferred by an off-the-shelf facial landmark detector, while the TMAs capture and model the relationships among dynamic changes in facial expressions, effectively extending the pre-trained image model to videos. Notably, the MCPs and TMAs add only a small fraction of trainable parameters (less than +10%) to the original image model. Moreover, we present a novel Emotion-Anchor (i.e., reference samples for each emotion category) based Self-Distillation Loss that reduces the detrimental influence of ambiguous emotion labels, further enhancing our S2D. Experiments on popular SFER and DFER datasets show that we achieve state-of-the-art performance.
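For illustration, below is a minimal PyTorch-style sketch of the overall idea described in the abstract: a ViT image backbone whose expression tokens are enhanced by a landmark-aware prompter (MCP) and which is extended to video with a lightweight residual temporal adapter (TMA). The module names are taken from the paper, but all dimensions, the gating-based fusion, and the convolutional temporal mixing here are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only; fusion and adapter details are assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn


class MultiViewComplementaryPrompter(nn.Module):
    """Enhances expression tokens with landmark-aware features (assumed design)."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, expr_tokens, landmark_tokens):
        lm = self.proj(landmark_tokens)                       # project landmark features
        g = self.gate(torch.cat([expr_tokens, lm], dim=-1))   # per-channel fusion gate
        return expr_tokens + g * lm                           # landmark-guided enhancement


class TemporalModelingAdapter(nn.Module):
    """Lightweight residual adapter that mixes information across frames (assumed design)."""

    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.temporal = nn.Conv1d(bottleneck, bottleneck, kernel_size=3, padding=1)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):                                     # x: (B, T, N, D) frame tokens
        B, T, N, D = x.shape
        h = self.down(x)                                      # (B, T, N, b)
        h = h.permute(0, 2, 3, 1).reshape(B * N, -1, T)       # fold tokens into the batch
        h = self.temporal(h)                                  # mix along the time axis
        h = h.reshape(B, N, -1, T).permute(0, 3, 1, 2)        # restore (B, T, N, b)
        return x + self.up(h)                                 # residual: frozen backbone path kept
```

Because both modules are residual, the pre-trained SFER backbone can stay frozen while only the MCP and TMA parameters are trained, which is consistent with the abstract's claim that they add less than +10% trainable parameters; the exact insertion points and training schedule would follow the paper.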

Task | Dataset | Model | Metric | Value | Global Rank
Facial Expression Recognition (FER) | AffectNet | S2D | Accuracy (7 emotion) | 67.62 | #3
Facial Expression Recognition (FER) | AffectNet | S2D | Accuracy (8 emotion) | 63.06 | #4
Dynamic Facial Expression Recognition | DFEW | S2D | WAR | 76.03 | #2
Dynamic Facial Expression Recognition | DFEW | S2D | UAR | 65.45 | #2
Dynamic Facial Expression Recognition | FERV39k | S2D | WAR | 52.56 | #1
Dynamic Facial Expression Recognition | FERV39k | S2D | UAR | 43.97 | #1
Dynamic Facial Expression Recognition | MAFW | S2D | WAR | 57.37 | #2
Dynamic Facial Expression Recognition | MAFW | S2D | UAR | 43.40 | #2
Facial Expression Recognition (FER) | RAF-DB | S2D | Overall Accuracy | 92.57 | #2
