ObamaNet: Photo-realistic lip-sync from text

We present ObamaNet, the first architecture that generates both audio and synchronized photo-realistic lip-sync video from any new text. Unlike other published lip-sync approaches, ours is composed entirely of trainable neural modules and does not rely on any traditional computer graphics methods.
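One of the trainable modules named in the methods list below is an LSTM, which in a system like this maps a sequence of audio features to mouth-keypoint coordinates. The following is a minimal sketch of that idea, not the paper's implementation: the feature dimension (26), hidden size (16), keypoint count (20), and the linear readout are all illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b stack the input/forget/candidate/output
    gate parameters along the first axis (4*H rows)."""
    H = h.shape[0]
    z = W @ x + U @ h + b                 # (4H,) gate pre-activations
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate
    g = np.tanh(z[2*H:3*H])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))        # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical dimensions: 26 audio features in, 16 hidden units,
# decoded to 20 mouth-keypoint coordinates by a linear readout.
rng = np.random.default_rng(0)
D, H, K = 26, 16, 20
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (K, H))

h, c = np.zeros(H), np.zeros(H)
audio = rng.normal(size=(8, D))           # 8 frames of dummy audio features
keypoints = []
for x in audio:
    h, c = lstm_step(x, h, c, W, U, b)
    keypoints.append(W_out @ h)
keypoints = np.array(keypoints)
print(keypoints.shape)                    # (8, 20): one keypoint vector per frame
```

In the full pipeline these predicted keypoints would condition a Pix2Pix-style generator (also listed below) that renders the photo-realistic frames.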




Methods used in the Paper


METHOD                         TYPE
Concatenated Skip Connection   Skip Connections
PatchGAN                       Discriminators
ReLU                           Activation Functions
Batch Normalization            Normalization
Convolution                    Convolutions
Leaky ReLU                     Activation Functions
Dropout                        Regularization
Pix2Pix                        Generative Models
Sigmoid Activation             Activation Functions
Tanh Activation                Activation Functions
LSTM                           Recurrent Neural Networks