Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers

22 Jul 2022  ·  Jia Li, Jiantao Nie, Dan Guo, Richang Hong, Meng Wang ·

Representation learning and feature disentanglement have recently attracted much research interest in facial expression recognition (FER). The ubiquitous ambiguity of emotion labels is detrimental to methods based on conventional supervised representation learning. Meanwhile, directly learning the mapping from a facial expression image to an emotion label lacks explicit supervision signals for facial details. In this paper, we propose a novel FER model, called Poker Face Vision Transformer or PF-ViT, to separate and recognize the disturbance-agnostic emotion from a static facial image by generating its corresponding poker face, without the need for paired images. Inspired by the Facial Action Coding System, we regard an expressive face as the comprehensive result of a set of facial muscle movements applied to one's poker face (i.e., emotionless face). PF-ViT is built on vanilla Vision Transformers, which are first pre-trained as Masked Autoencoders on a large facial expression dataset without emotion labels, yielding excellent representations. It consists of five components: 1) an encoder mapping the facial expression to a complete representation, 2) a separator decomposing the representation into an emotional component and an orthogonal residue, 3) a generator that can reconstruct the expressive face and synthesize the poker face, 4) a discriminator distinguishing fake faces produced by the generator, trained adversarially with the encoder and generator, and 5) a classification head recognizing the emotion. Quantitative and qualitative results demonstrate the effectiveness of our method, which outperforms state-of-the-art methods on four popular FER test sets.
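The five-component pipeline above can be sketched as a toy NumPy program. This is a minimal illustration of the data flow only, not the authors' implementation: simple linear maps stand in for the ViT encoder, generator, and classification head, the discriminator and all training losses are omitted, and every name and dimension below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy representation size (the real model uses ViT token embeddings)

# Hypothetical stand-ins for PF-ViT components (random, untrained weights).
W_enc = rng.standard_normal((D, D))   # 1) encoder
W_cls = rng.standard_normal((D, 8))   # 5) classification head (8 emotion classes)

def encode(x):
    """Map a toy 'face image' vector to a complete representation."""
    return W_enc @ x

def separate(r):
    """2) Decompose r into an emotional component e and an orthogonal residue p.

    Here we simply project onto a fixed 'emotion subspace' (the first D//2
    axes), so e and p are orthogonal by construction; the real separator
    learns this decomposition.
    """
    e = np.concatenate([r[: D // 2], np.zeros(D // 2)])
    p = r - e
    return e, p

def generate(e, p):
    """3) Generator: e + p reconstructs the expressive face's representation;
    zeroing the emotional component yields the 'poker face' representation."""
    return e + p

# Forward pass on one toy input.
x = rng.standard_normal(D)
r = encode(x)
e, p = separate(r)
assert np.isclose(e @ p, 0.0)         # components are orthogonal

recon = generate(e, p)                # reconstructed expressive face
poker = generate(np.zeros(D), p)      # synthesized emotionless poker face
logits = W_cls.T @ e                  # emotion is recognized from e alone
```

The point of the sketch is the factorization: the same residue `p` combines with the emotional component to reconstruct the input, or with a zero emotion to produce the poker face, while classification reads only the emotional component.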

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Facial Expression Recognition (FER) | AffectNet | ViT-tiny | Accuracy (8 emotion) | 58.28 | # 24 |
| Facial Expression Recognition (FER) | AffectNet | ViT-base + MAE | Accuracy (8 emotion) | 62.42 | # 7 |
| Facial Expression Recognition (FER) | AffectNet | ViT-base | Accuracy (8 emotion) | 57.99 | # 26 |
| Facial Expression Recognition (FER) | FER+ | ViT-tiny | Accuracy | 88.56 | # 10 |
| Facial Expression Recognition (FER) | FER+ | ViT-base + MAE | Accuracy | 90.18 | # 5 |
| Facial Expression Recognition (FER) | FER+ | ViT-base | Accuracy | 88.91 | # 9 |
| Facial Expression Recognition (FER) | RAF-DB | ViT-tiny | Overall Accuracy | 87.03 | # 19 |
| Facial Expression Recognition (FER) | RAF-DB | ViT-base + MAE | Overall Accuracy | 91.07 | # 8 |
| Facial Expression Recognition (FER) | RAF-DB | ViT-base | Overall Accuracy | 87.22 | # 18 |
