Generative Models

Adversarially Learned Inference

Introduced by Dumoulin et al. in Adversarially Learned Inference

Adversarially Learned Inference (ALI) is a generative modelling approach that casts the learning of both an inference machine (or encoder) and a deep directed generative model (or decoder) in a GAN-like adversarial framework. A discriminator is trained to distinguish joint samples of the data and the corresponding latent variable produced by the encoder (or approximate posterior) from joint samples produced by the decoder; in opposition, the encoder and the decoder are trained together to fool the discriminator. The discriminator is therefore not asked merely to distinguish synthetic samples from real data: it must distinguish between two joint distributions over the data space and the latent variables.

An ALI differs from a GAN in two ways:

  • The generator has two components: the encoder, $G_z\left(\mathbf{x}\right)$, which maps data samples $\mathbf{x}$ to $z$-space, and the decoder, $G_x\left(\mathbf{z}\right)$, which maps samples from the prior $p\left(\mathbf{z}\right)$ (a source of noise) to the input space.
  • The discriminator is trained to distinguish between joint pairs $\left(\mathbf{x}, \tilde{\mathbf{z}} = G_z\left(\mathbf{x}\right)\right)$ and $\left(\tilde{\mathbf{x}} = G_x\left(\mathbf{z}\right), \mathbf{z}\right)$, as opposed to marginal samples $\mathbf{x} \sim q\left(\mathbf{x}\right)$ and $\tilde{\mathbf{x}} \sim p\left(\mathbf{x}\right)$.
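The adversarial game above can be sketched numerically. The following is a minimal, hypothetical NumPy sketch (not the paper's implementation, which uses deep networks and gradient-based training): linear maps stand in for the encoder $G_z$ and decoder $G_x$, a logistic regressor on the concatenated pair stands in for the joint discriminator, and `ali_value` evaluates the ALI value function $V(D, G) = \mathbb{E}_{q(\mathbf{x})}[\log D(\mathbf{x}, G_z(\mathbf{x}))] + \mathbb{E}_{p(\mathbf{z})}[\log(1 - D(G_x(\mathbf{z}), \mathbf{z}))]$ on sampled batches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: data space R^4, latent space R^2 (illustrative choices).
x_dim, z_dim = 4, 2

# Hypothetical linear encoder/decoder weights; ALI proper uses deep nets.
W_enc = rng.normal(size=(z_dim, x_dim))
W_dec = rng.normal(size=(x_dim, z_dim))


def encoder(x):
    # G_z(x): map data samples to z-space.
    return x @ W_enc.T


def decoder(z):
    # G_x(z): map prior samples to the input space.
    return z @ W_dec.T


# Joint discriminator D(x, z): logistic regression on the concatenated pair.
w_d = rng.normal(size=x_dim + z_dim)


def discriminator(x, z):
    logits = np.concatenate([x, z], axis=1) @ w_d
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid


def ali_value(x_batch, z_batch, eps=1e-8):
    # V(D, G) = E_q[log D(x, G_z(x))] + E_p[log(1 - D(G_x(z), z))]
    d_enc = discriminator(x_batch, encoder(x_batch))  # encoder joint (x, z~)
    d_dec = discriminator(decoder(z_batch), z_batch)  # decoder joint (x~, z)
    return np.mean(np.log(d_enc + eps)) + np.mean(np.log(1.0 - d_dec + eps))


x_batch = rng.normal(size=(64, x_dim))  # stand-in for data samples x ~ q(x)
z_batch = rng.normal(size=(64, z_dim))  # prior samples z ~ p(z)
v = ali_value(x_batch, z_batch)
```

In training, the discriminator would ascend this value while the encoder and decoder jointly descend it; at the optimum the two joint distributions match, so the encoder and decoder become (approximately) inverses of each other.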
Source: Adversarially Learned Inference


Tasks


Task                         Papers   Share
Cloud Computing              1        20.00%
Management                   1        20.00%
Face Recognition             1        20.00%
Image Generation             1        20.00%
Image-to-Image Translation   1        20.00%

