Fitting approximate posteriors with variational inference transforms the inference problem into an optimization problem, where the goal is (typically) to maximize the evidence lower bound (ELBO) on the log marginal likelihood of the data.
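For reference, the bound in question, written in standard VAE-style notation (the symbols are conventional, not taken from the excerpt itself):

```latex
\log p_\theta(x) \;\ge\; \mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```

Maximizing the ELBO jointly tightens the bound (by improving the approximate posterior q) and raises the marginal likelihood itself.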
Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well.
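Because the latents in a sigmoid belief network are discrete, the reparameterization trick does not apply; one scalable alternative in the spirit of neural variational inference is the score-function (REINFORCE) estimator with a baseline for variance reduction. A minimal sketch, assuming PyTorch and hypothetical callables `decoder_logp`, `prior_logp`, and a precomputed `baseline`:

```python
import torch

def score_function_elbo_loss(logits, decoder_logp, prior_logp, baseline):
    """Surrogate loss whose gradient is the score-function (REINFORCE)
    estimator of the ELBO gradient for binary latent variables.
    `decoder_logp`, `prior_logp`, and `baseline` are illustrative stand-ins."""
    q = torch.distributions.Bernoulli(logits=logits)
    z = q.sample()                                   # discrete: no reparameterization
    log_q = q.log_prob(z).sum(-1)
    elbo = decoder_logp(z) + prior_logp(z) - log_q   # per-example ELBO estimate
    # Subtracting a baseline reduces variance without biasing the gradient.
    learning_signal = (elbo - baseline).detach()
    return -(learning_signal * log_q + elbo).mean()
```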
Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions.
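The algorithm simulates Hamiltonian dynamics with a leapfrog integrator and corrects the discretization error with a Metropolis accept/reject step. A minimal single-transition sketch (step size and trajectory length are illustrative defaults; `log_prob` and `grad_log_prob` are user-supplied for the target density):

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=np.random):
    """One Hamiltonian Monte Carlo transition on state x (1-D array)."""
    p = rng.standard_normal(x.shape)                 # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of Hamiltonian dynamics.
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    # Metropolis correction removes the integrator's discretization bias.
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x
```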
In this paper, we propose the "adversarial autoencoder" (AAE), a probabilistic autoencoder that uses generative adversarial networks (GANs) to perform variational inference by matching the aggregated posterior of the autoencoder's hidden code vector to an arbitrary prior distribution.
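A minimal sketch of the resulting three-phase training step, assuming PyTorch; the architectures, dimensions, and Gaussian prior here are illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 8
enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
disc = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    # 1) Reconstruction phase: ordinary autoencoder loss.
    opt_ae.zero_grad()
    nn.functional.mse_loss(dec(enc(x)), x).backward()
    opt_ae.step()
    # 2) Discriminator phase: separate prior samples from encoder codes.
    z_fake = enc(x).detach()
    z_real = torch.randn_like(z_fake)        # arbitrary prior: standard Gaussian here
    opt_d.zero_grad()
    (bce(disc(z_real), torch.ones(len(x), 1)) +
     bce(disc(z_fake), torch.zeros(len(x), 1))).backward()
    opt_d.step()
    # 3) Regularization phase: encoder fools the discriminator, pushing
    #    the aggregated posterior toward the prior.
    opt_ae.zero_grad()
    bce(disc(enc(x)), torch.ones(len(x), 1)).backward()
    opt_ae.step()
```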
Existing approaches to inference in deep Gaussian process (DGP) models assume approximate posteriors that force independence between the layers, and do not work well in practice.
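Concretely, the independence assumption being criticized is a variational posterior that factorizes across the L layers (standard notation):

```latex
q\!\left( f_1, \ldots, f_L \right) \;=\; \prod_{l=1}^{L} q(f_l),
```

whereas in the true posterior the layers are strongly coupled, since each layer's function is evaluated at the outputs of the layer below.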
We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization.
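The method (Stein variational gradient descent) iteratively transports a set of particles along a direction that trades off moving toward high target density against a repulsive term that keeps particles spread out. A minimal NumPy sketch with an RBF kernel; the bandwidth `h` and step size are illustrative, and `grad_log_p` is a user-supplied score function:

```python
import numpy as np

def rbf_kernel(X, h):
    """RBF kernel matrix and the summed kernel-gradient term used by SVGD."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / h)
    # sum_j grad_{x_j} k(x_j, x_i) = (2/h) * sum_j K_ij (x_i - x_j)
    grad_K = (2.0 / h) * (K.sum(1, keepdims=True) * X - K @ X)
    return K, grad_K

def svgd_step(X, grad_log_p, step_size=0.1, h=1.0):
    """One SVGD update on n particles X of shape (n, d)."""
    K, grad_K = rbf_kernel(X, h)
    # Kernelized score term pulls particles toward mass; grad_K repels them.
    phi = (K @ grad_log_p(X) + grad_K) / len(X)
    return X + step_size * phi
```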
Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic generalization abilities.
First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods.
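The reparameterization referred to here rewrites a Gaussian sample as a deterministic, differentiable function of its parameters plus parameter-free noise, so the lower bound can be optimized with ordinary backpropagation. A minimal sketch, assuming PyTorch:

```python
import torch

def reparameterized_gaussian_sample(mu, log_var):
    """Reparameterization trick: z ~ N(mu, sigma^2) expressed as
    z = mu + sigma * eps with eps ~ N(0, I), so gradients with respect
    to mu and log_var flow through the sampling step."""
    eps = torch.randn_like(mu)          # noise carries no learnable parameters
    return mu + torch.exp(0.5 * log_var) * eps
```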
GPflow is a Gaussian process library that uses TensorFlow for its core computations and Python for its front end.
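A minimal usage sketch, assuming the GPflow 2.x API and synthetic data; exact GP regression with a squared-exponential kernel, optimized by the bundled SciPy wrapper:

```python
import numpy as np
import gpflow

# Toy 1-D regression data.
X = np.linspace(0, 1, 50)[:, None]
Y = np.sin(6 * X) + 0.1 * np.random.randn(50, 1)

# Exact GP regression model; hyperparameters are fit by maximizing the
# log marginal likelihood via model.training_loss.
model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

mean, var = model.predict_y(np.array([[0.5]]))   # posterior predictive at x = 0.5
```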