Towards Realistic Generative 3D Face Models

In recent years, there has been significant progress in 2D generative face models, fueled by applications such as animation, synthetic data generation, and digital avatars. However, lacking 3D information, these 2D models often struggle to accurately disentangle facial attributes such as pose, expression, and illumination, which limits their editing capabilities. To address this limitation, this paper proposes a 3D controllable generative face model that produces high-quality albedo and precise 3D shape by leveraging existing 2D generative models. By combining 2D face generative models with semantic face manipulation, the method enables editing of detailed 3D rendered faces. The proposed framework uses an alternating descent optimization over shape and albedo, with differentiable rendering used to learn high-quality shape and albedo without 3D supervision. The approach outperforms state-of-the-art (SOTA) methods on the well-known NoW benchmark for shape reconstruction, and it surpasses SOTA reconstruction models in recovering rendered faces' identities across novel poses by an average of 10%. Additionally, the paper demonstrates direct control of expressions in 3D faces by exploiting the latent space, enabling text-based editing of 3D faces.
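Below is a minimal sketch of the alternating descent idea described in the abstract: shape and albedo are optimized in turn against an image-space loss computed through a differentiable renderer, so no 3D supervision is needed. The `toy_render` function, tensor shapes, and optimizer settings are illustrative assumptions, not the authors' actual implementation (which uses a full differentiable rasterizer and generative models).

```python
import torch

def toy_render(shape, albedo):
    # Stand-in for a real differentiable renderer: derive a pseudo-shading
    # term from the geometry and modulate the albedo with it.
    shading = torch.sigmoid(shape)            # (H, W) shading from geometry
    return albedo * shading.unsqueeze(-1)     # (H, W, 3) rendered image

# Target photo (random stand-in here) and learnable shape / albedo tensors.
target = torch.rand(64, 64, 3)
shape = torch.zeros(64, 64, requires_grad=True)
albedo = torch.full((64, 64, 3), 0.5, requires_grad=True)

opt_shape = torch.optim.Adam([shape], lr=1e-2)
opt_albedo = torch.optim.Adam([albedo], lr=1e-2)

for step in range(200):
    # Stage 1: update shape while albedo is held fixed.
    loss = (toy_render(shape, albedo.detach()) - target).abs().mean()
    opt_shape.zero_grad(); loss.backward(); opt_shape.step()

    # Stage 2: update albedo while shape is held fixed.
    loss = (toy_render(shape.detach(), albedo) - target).abs().mean()
    opt_albedo.zero_grad(); loss.backward(); opt_albedo.step()
```

The alternation mirrors the paper's two-stage optimization: each photometric loss backpropagates through the renderer to only one set of parameters at a time, which keeps the two factors from absorbing each other's errors.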

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Face Reconstruction | REALY | AlbedoGAN | @nose | 1.656 (±0.374) | #5 |
| 3D Face Reconstruction | REALY | AlbedoGAN | @mouth | 2.087 (±0.839) | #17 |
| 3D Face Reconstruction | REALY | AlbedoGAN | @forehead | 2.102 (±0.512) | #7 |
| 3D Face Reconstruction | REALY | AlbedoGAN | @cheek | 1.141 (±0.303) | #5 |
| 3D Face Reconstruction | REALY | AlbedoGAN | all | 1.746 | #7 |
| 3D Face Reconstruction | REALY (side-view) | AlbedoGAN | @nose | 1.576 (±0.338) | #4 |
| 3D Face Reconstruction | REALY (side-view) | AlbedoGAN | @mouth | 2.218 (±0.952) | #13 |
| 3D Face Reconstruction | REALY (side-view) | AlbedoGAN | @forehead | 2.142 (±0.554) | #5 |
| 3D Face Reconstruction | REALY (side-view) | AlbedoGAN | @cheek | 1.112 (±0.278) | #3 |
| 3D Face Reconstruction | REALY (side-view) | AlbedoGAN | all | 1.762 | #5 |

Methods


No methods listed for this paper.