Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency

Recent learning-based approaches, in which models are trained on single-view images, have shown promising results for monocular 3D face reconstruction, but they suffer from ambiguity in face pose and depth, which makes the problem ill-posed. In contrast to previous work that only enforces 2D feature constraints, we propose a self-supervised training architecture that leverages multi-view geometry consistency, which provides reliable constraints on face pose and depth estimation. We first propose an occlusion-aware view synthesis method to apply multi-view geometry consistency to self-supervised learning. We then design three novel loss functions for multi-view consistency: a pixel consistency loss, a depth consistency loss, and a facial landmark-based epipolar loss. Our method is accurate and robust, especially under large variations in expression, pose, and illumination. Comprehensive experiments on face alignment and 3D face reconstruction benchmarks demonstrate its superiority over state-of-the-art methods. Our code and models are released at https://github.com/jiaxiangshang/MGCNet.
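As an illustration of the multi-view constraints described above, the sketch below shows one plausible form of the facial landmark-based epipolar loss: matched 2D landmarks across two views should lie on each other's epipolar lines under the estimated relative camera pose. This is a minimal NumPy sketch, not the authors' released implementation; the intrinsics `K` and relative pose `(R, t)` are assumed to come from the network's pose estimates, and in actual training this would be written as a differentiable tensor operation.

```python
import numpy as np

def skew(t):
    """3x3 cross-product (skew-symmetric) matrix of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_loss(lmk_a, lmk_b, K, R, t):
    """Mean point-to-epipolar-line distance over matched landmark pairs.

    lmk_a, lmk_b: (N, 2) pixel coordinates of matched landmarks in views A and B.
    K: (3, 3) camera intrinsics; R, t: relative pose from view A to view B.
    """
    K_inv = np.linalg.inv(K)
    F = K_inv.T @ skew(t) @ R @ K_inv          # fundamental matrix
    ones = np.ones((lmk_a.shape[0], 1))
    pa = np.hstack([lmk_a, ones])              # homogeneous points in view A
    pb = np.hstack([lmk_b, ones])              # homogeneous points in view B
    lines_b = pa @ F.T                         # epipolar line of each pa in view B
    # distance of each landmark in B from its epipolar line |pb . l| / ||(l1, l2)||
    num = np.abs(np.sum(pb * lines_b, axis=1))
    den = np.linalg.norm(lines_b[:, :2], axis=1) + 1e-8
    return float(np.mean(num / den))
```

Driving this residual to zero constrains the estimated face pose to be geometrically consistent across views, which is exactly the kind of supervision single-view 2D feature losses cannot provide.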

ECCV 2020 | PDF | Abstract
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| 3D Face Reconstruction | NoW Benchmark | MGCNet | Mean Reconstruction Error (mm) | 1.87 | #14 |
| 3D Face Reconstruction | NoW Benchmark | MGCNet | Stdev Reconstruction Error (mm) | 2.63 | #17 |
| 3D Face Reconstruction | NoW Benchmark | MGCNet | Median Reconstruction Error (mm) | 1.31 | #14 |
| 3D Face Reconstruction | REALY | MGCNet | @nose | 1.827 (±0.383) | #12 |
| 3D Face Reconstruction | REALY | MGCNet | @mouth | 1.409 (±0.418) | #4 |
| 3D Face Reconstruction | REALY | MGCNet | @forehead | 2.248 (±0.508) | #8 |
| 3D Face Reconstruction | REALY | MGCNet | @cheek | 1.665 (±0.644) | #17 |
| 3D Face Reconstruction | REALY | MGCNet | all | 1.787 | #8 |
| 3D Face Reconstruction | REALY (side-view) | MGCNet | @nose | 1.827 (±0.383) | #9 |
| 3D Face Reconstruction | REALY (side-view) | MGCNet | @mouth | 1.409 (±0.418) | #2 |
| 3D Face Reconstruction | REALY (side-view) | MGCNet | @forehead | 2.248 (±0.508) | #6 |
| 3D Face Reconstruction | REALY (side-view) | MGCNet | @cheek | 1.665 (±0.644) | #13 |
| 3D Face Reconstruction | REALY (side-view) | MGCNet | all | 1.787 | #6 |
