Face Generation

120 papers with code • 0 benchmarks • 4 datasets

Face generation is the task of generating (or interpolating) new faces from an existing dataset.

State-of-the-art results for this task are tracked under the Image Generation parent task.

(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation)
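For readers new to the task, the sketch below is a minimal, illustrative PyTorch example of what unconditional face generation amounts to: an untrained DCGAN-style generator maps random latent codes to 64x64 images, and interpolating between two codes blends two faces. It is not the architecture of any paper listed here; all names and sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class FaceGenerator(nn.Module):
    """DCGAN-style generator: maps a latent vector to a 64x64 RGB image."""
    def __init__(self, latent_dim=128, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, ch * 8, 4, 1, 0), nn.BatchNorm2d(ch * 8), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.ReLU(True),      # 8x8
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(True),              # 32x32
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh(),                                           # 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# New faces come from random latent codes; interpolating two codes blends two faces.
g = FaceGenerator().eval()
z0, z1 = torch.randn(1, 128), torch.randn(1, 128)
with torch.no_grad():
    faces = g(torch.cat([(1 - t) * z0 + t * z1 for t in torch.linspace(0, 1, 8)]))  # (8, 3, 64, 64)
```

In practice the generator is trained adversarially (or replaced by a diffusion model) on a face dataset; the interpolation trick is what the "interpolating new faces" part of the task description refers to.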

Latest papers with no code

TextGaze: Gaze-Controllable Face Generation with Natural Language

no code yet • 26 Apr 2024

Our work first introduces a text-of-gaze dataset containing over 90k text descriptions spanning a dense distribution of gaze and head poses.

Sketch2Human: Deep Human Generation with Disentangled Geometry and Appearance Control

no code yet • 24 Apr 2024

This work presents Sketch2Human, the first system for controllable full-body human image generation guided by a semantic sketch (for geometry control) and a reference image (for appearance control).

Adversarial Identity Injection for Semantic Face Image Synthesis

no code yet • 16 Apr 2024

Among all the explored techniques, Semantic Image Synthesis (SIS) methods, whose goal is to generate an image conditioned on a semantic segmentation mask, are the most promising, even though preserving the perceived identity of the input subject is not their main concern.
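To make "conditioned on a semantic segmentation mask" concrete, here is a toy sketch of mask-conditioned generation, not the paper's method: a generator takes a one-hot face-parsing mask plus per-pixel noise and outputs an RGB image whose layout follows the mask. The class count, module names, and sizes are assumptions made purely for the example.

```python
import torch
import torch.nn as nn

class MaskConditionedGenerator(nn.Module):
    """Toy semantic image synthesis generator: input is a one-hot segmentation
    mask (B, n_classes, H, W) concatenated with noise, output is an RGB image."""
    def __init__(self, n_classes=19, noise_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes + noise_dim, hidden, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(hidden, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mask, noise):
        return self.net(torch.cat([mask, noise], dim=1))

# The mask drives the spatial layout of the generated face; the noise adds variation.
mask = torch.zeros(1, 19, 128, 128)
mask[:, 0] = 1.0                                   # trivial mask: every pixel is class 0
noise = torch.randn(1, 16, 128, 128)
img = MaskConditionedGenerator()(mask, noise)      # (1, 3, 128, 128)
```

Identity preservation, the concern raised in the abstract, would require an additional mechanism (e.g. an identity embedding injected into the generator), which this sketch deliberately omits.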

SphereHead: Stable 3D Full-head Synthesis with Spherical Tri-plane Representation

no code yet • 8 Apr 2024

We further introduce a view-image consistency loss for the discriminator to emphasize the correspondence of the camera parameters and the images.
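One plausible reading of such a loss, sketched below only for illustration (SphereHead's exact formulation may differ): the discriminator gets a second head that regresses camera parameters from the image, and the consistency term penalizes the gap between what it recovers and the parameters the image was actually rendered with. PoseAwareDiscriminator and the 3-parameter camera are hypothetical.

```python
import torch
import torch.nn as nn

class PoseAwareDiscriminator(nn.Module):
    """Toy discriminator with two heads: a real/fake logit and a camera-parameter
    prediction (here a 3-vector, e.g. yaw/pitch/roll) recovered from the image."""
    def __init__(self, cam_dim=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(64, 1)
        self.cam_head = nn.Linear(64, cam_dim)

    def forward(self, img):
        h = self.backbone(img)
        return self.adv_head(h), self.cam_head(h)

def view_image_consistency_loss(disc, img, cam_params):
    """Penalize mismatch between the camera parameters used to render the image
    and those the discriminator recovers from the image itself."""
    _, cam_pred = disc(img)
    return nn.functional.mse_loss(cam_pred, cam_params)

img, cam = torch.randn(4, 3, 64, 64), torch.randn(4, 3)
loss = view_image_consistency_loss(PoseAwareDiscriminator(), img, cam)
```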

DreamSalon: A Staged Diffusion Framework for Preserving Identity-Context in Editable Face Generation

no code yet • 28 Mar 2024

While large-scale pre-trained text-to-image models can synthesize diverse and high-quality human-centered images, novel challenges arise with the nuanced task of "identity fine editing": precisely modifying specific features of a subject while maintaining its inherent identity and context.

Superior and Pragmatic Talking Face Generation with Teacher-Student Framework

no code yet • 26 Mar 2024

Talking face generation technology creates talking videos from arbitrary appearance and motion signals, with the "arbitrary" offering ease of use but also introducing challenges in practical applications.

FlowVQTalker: High-Quality Emotional Talking Face Generation through Normalizing Flow and Quantization

no code yet • 11 Mar 2024

Specifically, we develop a flow-based coefficient generator that encodes the dynamics of facial emotion into a multi-emotion-class latent space represented as a mixture distribution.
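To make the idea concrete, the sketch below shows one generic way to couple a normalizing flow with a multi-class mixture latent; it is not FlowVQTalker's actual generator. Affine-coupling layers map motion coefficients to a latent space whose base density is a mixture of Gaussians with one component per emotion class, trained by exact maximum likelihood. All dimensions and names are assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

class AffineCoupling(nn.Module):
    """One affine coupling layer: the first half of the dims predicts an elementwise
    scale/shift applied to the second half (invertible, with a cheap log-det)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                              # bounded scales for stability
        return torch.cat([x1, x2 * s.exp() + t], dim=-1), s.sum(dim=-1)

class EmotionFlow(nn.Module):
    """Normalizing flow whose base distribution is a mixture of Gaussians,
    one component per emotion class; log_prob is exact via change of variables."""
    def __init__(self, dim=64, n_emotions=8, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        self.mix_logits = nn.Parameter(torch.zeros(n_emotions))
        self.means = nn.Parameter(torch.randn(n_emotions, dim))
        self.log_stds = nn.Parameter(torch.zeros(n_emotions, dim))

    def base(self):
        components = Independent(Normal(self.means, self.log_stds.exp()), 1)
        return MixtureSameFamily(Categorical(logits=self.mix_logits), components)

    def log_prob(self, coeffs):
        z, log_det = coeffs, 0.0
        for layer in self.layers:
            z = torch.flip(z, dims=[-1])               # permute so both halves get transformed
            z, ld = layer(z)
            log_det = log_det + ld
        return self.base().log_prob(z) + log_det

# Train by maximizing the log-likelihood of facial motion coefficients.
flow = EmotionFlow()
nll = -flow.log_prob(torch.randn(16, 64)).mean()
```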

Style2Talker: High-Resolution Talking Head Generation with Emotion Style and Art Style

no code yet • 11 Mar 2024

Although automatically animating audio-driven talking heads has recently received growing interest, previous efforts have mainly concentrated on achieving lip synchronization with the audio, neglecting two crucial elements for generating expressive videos: emotion style and art style.

G4G: A Generic Framework for High Fidelity Talking Face Generation with Fine-grained Intra-modal Alignment

no code yet • 28 Feb 2024

Despite numerous completed studies, achieving high-fidelity talking face generation with highly synchronized lip movements corresponding to arbitrary audio remains a significant challenge in the field.

AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation

no code yet • 25 Feb 2024

In this paper, we propose AVI-Talking, an Audio-Visual Instruction system for expressive talking face generation.