The Information-Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Modeling

A variety of learning objectives have recently been proposed for training generative models. We show that many of them, including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, VAE, $\beta$-VAE, adversarial autoencoders, AVB, and InfoVAE, are Lagrangian duals of the same primal optimization problem...
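As a concrete illustration of the Lagrangian view, the $\beta$-VAE objective can be read as a Lagrangian relaxation of a constrained problem: the reconstruction error is minimized subject to a constraint on the KL divergence, with $\beta$ playing the role of the multiplier (setting $\beta = 1$ recovers the standard VAE ELBO). A minimal NumPy sketch, using our own notation rather than the paper's:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def beta_vae_loss(recon_err, mu, log_var, beta=1.0):
    # Lagrangian-style relaxation: reconstruction term plus
    # beta (the multiplier) times the KL constraint term.
    return recon_err + beta * gaussian_kl(mu, log_var)
```

With `beta = 1.0` this is the (negative) ELBO of a standard VAE; other members of the family arise from different multipliers and different choices of divergence.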



Methods used in the Paper


METHOD                    TYPE
Batch Normalization       Normalization
Dense Connections         Feedforward Networks
Residual Connection       Skip Connections
Beta-VAE                  Generative Models
GAN Least Squares Loss    Loss Functions
Cycle Consistency Loss    Loss Functions
Sigmoid Activation        Activation Functions
Tanh Activation           Activation Functions
Residual Block            Skip Connection Blocks
Convolution               Convolutions
Instance Normalization    Normalization
PatchGAN                  Discriminators
BiGAN                     Generative Models
ALI                       Generative Models
CycleGAN                  Generative Models
Softmax                   Output Functions
ReLU                      Activation Functions
Feedforward Network       Feedforward Networks
Leaky ReLU                Activation Functions
InfoGAN                   Generative Models
VAE                       Generative Models