Micro Expression Generation with Thin-plate Spline Motion Model and Face Parsing

Micro-expression generation aims at transferring the expression from a driving video to a source image, which can be viewed as a motion transfer task. Recently, several works have been proposed to tackle this problem and have achieved strong performance. However, due to the intrinsic complexity of facial motion and the differing attributes of face regions, the task remains challenging. In this paper, we propose an end-to-end unsupervised motion transfer network to address this challenge. Since facial motion is non-rigid, we adopt an effective and flexible thin-plate spline motion estimation method to estimate the optical flow of the face motion. Moreover, we observe that faces with eyeglasses often exhibit implausible deformations during motion transfer. We therefore introduce a face parsing method that pays specific attention to the eyeglass regions to ensure plausible deformation. We conduct experiments on the datasets provided by the ACM MM 2022 micro-expression grand challenge (MEGC2022) and compare our method with several other representative methods; ours achieves the best performance. We (Team: USTC-IAT-United) also compare our method with those of the other MEGC2022 competitors, and the expert evaluation results show that our method performs best, which verifies its effectiveness. Our code is available at https://github.com/HowToNameMe/micro-expression
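The abstract mentions estimating a dense optical flow of the face from a thin-plate spline (TPS) motion model driven by sparse keypoints. The snippet below is a minimal, illustrative sketch of that idea, not the authors' implementation: given matched keypoints between a source frame and a driving frame (toy values here), it fits the standard TPS linear system and evaluates the warp on a dense grid to obtain a flow field. All function names and the toy keypoints are assumptions for illustration.

```python
# Minimal TPS warp sketch: fit coefficients from paired 2-D keypoints,
# then interpolate a dense motion field (optical flow) over an image grid.
import numpy as np

def tps_radial(r2):
    """TPS radial basis U(r) = r^2 * log(r^2), with U(0) = 0."""
    return np.where(r2 == 0, 0.0, r2 * np.log(r2 + 1e-12))

def fit_tps(src_pts, dst_pts):
    """Solve TPS coefficients mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 2) arrays of matched keypoints.
    Returns radial weights W (N, 2) and affine part A (3, 2).
    """
    n = src_pts.shape[0]
    d2 = np.sum((src_pts[:, None] - src_pts[None, :]) ** 2, axis=-1)
    K = tps_radial(d2)                            # (N, N) radial kernel
    P = np.hstack([np.ones((n, 1)), src_pts])     # (N, 3) affine terms
    # Standard TPS linear system [[K, P], [P^T, 0]] [W; A] = [dst; 0].
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst_pts
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]                       # W, A

def tps_warp(query, src_pts, W, A):
    """Apply the fitted TPS to query points of shape (M, 2)."""
    d2 = np.sum((query[:, None] - src_pts[None, :]) ** 2, axis=-1)
    U = tps_radial(d2)                            # (M, N)
    Pq = np.hstack([np.ones((query.shape[0], 1)), query])
    return U @ W + Pq @ A

# Dense flow over an H x W grid: warped coordinates minus the grid itself.
H, W_img = 64, 64
ys, xs = np.meshgrid(np.arange(H), np.arange(W_img), indexing="ij")
grid = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(float)
src_kp = np.random.rand(5, 2) * [W_img, H]        # toy source keypoints
dst_kp = src_kp + np.random.randn(5, 2) * 2.0     # toy driving keypoints
Wc, Ac = fit_tps(src_kp, dst_kp)
flow = (tps_warp(grid, src_kp, Wc, Ac) - grid).reshape(H, W_img, 2)
print(flow.shape)  # (64, 64, 2) per-pixel displacement
```

In the paper's setting, a face parsing mask could additionally be used to weight or constrain the estimated flow around eyeglass regions; that part is not shown here.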
