Variational Inference
750 papers with code • 1 benchmark • 5 datasets
Fitting approximate posteriors with variational inference transforms the inference problem into an optimization problem, where the goal is (typically) to maximize the evidence lower bound (ELBO), a lower bound on the log marginal likelihood of the data.
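As a concrete illustration (not tied to any paper below), here is a minimal JAX sketch of that optimization for a mean-field Gaussian approximation, using the reparameterization trick; the target log density, step size, and sample count are all placeholder choices.

import jax
import jax.numpy as jnp

def log_joint(z):
    # Placeholder unnormalized posterior log density.
    return -0.5 * jnp.sum((z - 2.0) ** 2)

def elbo(params, key, n_samples=32):
    mu, log_sigma = params
    eps = jax.random.normal(key, (n_samples, mu.shape[0]))
    z = mu + jnp.exp(log_sigma) * eps  # reparameterized samples z ~ q(z)
    log_q = jax.scipy.stats.norm.logpdf(z, mu, jnp.exp(log_sigma)).sum(axis=-1)
    log_p = jax.vmap(log_joint)(z)
    return jnp.mean(log_p - log_q)  # Monte Carlo estimate of E_q[log p - log q]

params = (jnp.zeros(2), jnp.zeros(2))  # (mu, log_sigma)
elbo_grad = jax.jit(jax.grad(elbo))
key = jax.random.PRNGKey(0)
for _ in range(500):
    key, subkey = jax.random.split(key)
    grads = elbo_grad(params, subkey)
    params = jax.tree_util.tree_map(lambda p, g: p + 1e-2 * g, params, grads)  # gradient ascent

Maximizing this quantity both tightens the bound and pulls q toward the posterior, since the gap between the ELBO and the log evidence is exactly KL(q || p).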
Libraries
Use these libraries to find Variational Inference models and implementations.

Latest papers
Sequential Monte Carlo for Inclusive KL Minimization in Amortized Variational Inference
As an alternative, we propose SMC-Wake, a procedure for fitting an amortized variational approximation that uses likelihood-tempered sequential Monte Carlo samplers to estimate the gradient of the inclusive KL divergence.
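SMC-Wake's tempered samplers are beyond a short snippet, but for context, the sketch below shows the standard self-normalized importance sampling (SNIS) estimator of the inclusive KL gradient that such methods refine; the model and variational family here are illustrative placeholders.

import jax
import jax.numpy as jnp

def log_joint(z):
    # Placeholder unnormalized log p(x, z) at a fixed observation x.
    return -0.5 * jnp.sum((z - 1.0) ** 2)

def log_q(phi, z):
    mu, log_sigma = phi
    return jax.scipy.stats.norm.logpdf(z, mu, jnp.exp(log_sigma)).sum()

def inclusive_kl_grad(phi, key, n=1000):
    # grad_phi KL(p || q_phi) = -E_{p(z|x)}[grad_phi log q_phi(z)],
    # estimated with self-normalized weights on proposals drawn from q_phi.
    mu, log_sigma = phi
    z = mu + jnp.exp(log_sigma) * jax.random.normal(key, (n, mu.shape[0]))
    log_w = jax.vmap(log_joint)(z) - jax.vmap(lambda zi: log_q(phi, zi))(z)
    w = jax.nn.softmax(log_w)  # self-normalized importance weights
    scores = jax.vmap(jax.grad(log_q), in_axes=(None, 0))(phi, z)
    return jax.tree_util.tree_map(lambda s: -jnp.tensordot(w, s, axes=1), scores)

When the weights are badly skewed, this estimator can be heavily biased, which is the kind of failure mode SMC-Wake's likelihood-tempered samplers are designed to mitigate.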
An Efficient Difference-of-Convex Solver for Privacy Funnel
The proposed DC separation results in a closed-form update equation, which allows straightforward application to both known and unknown distribution settings.
Stable Training of Normalizing Flows for High-dimensional Variational Inference
However, in practice, training deep normalizing flows for approximating high-dimensional posterior distributions is often infeasible due to the high variance of the stochastic gradients.
Batch and match: black-box variational inference with a score-based divergence
We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance.
BlackJAX: Composable Bayesian inference in JAX
BlackJAX is a library implementing sampling and variational inference algorithms commonly used in Bayesian computation.
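As a quick orientation, here is a minimal sampling example assuming the v1-style BlackJAX API (blackjax.nuts with .init and .step); exact signatures may differ across versions, and the log density is a placeholder.

import jax
import jax.numpy as jnp
import blackjax

# Placeholder target: a standard Gaussian log density in 2D.
logdensity = lambda z: -0.5 * jnp.sum(z ** 2)

nuts = blackjax.nuts(logdensity, step_size=0.5, inverse_mass_matrix=jnp.ones(2))
state = nuts.init(jnp.zeros(2))

key = jax.random.PRNGKey(0)
samples = []
for _ in range(1000):
    key, subkey = jax.random.split(key)
    state, info = nuts.step(subkey, state)
    samples.append(state.position)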
Training Bayesian Neural Networks with Sparse Subspace Variational Inference
Bayesian neural networks (BNNs) offer uncertainty quantification but come with the downside of substantially increased training and inference costs.
The VampPrior Mixture Model
Current clustering priors for deep latent variable models (DLVMs) require defining the number of clusters a priori and are susceptible to poor initializations.
Bayesian Deep Learning for Remaining Useful Life Estimation via Stein Variational Gradient Descent
In particular, we show through experimental studies on simulated run-to-failure turbofan engine degradation data that Bayesian deep learning models trained via Stein variational gradient descent consistently outperform, in both convergence speed and predictive performance, the same models trained via parametric variational inference as well as their frequentist counterparts trained via backpropagation.
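For readers unfamiliar with the method, here is a minimal sketch of one Stein variational gradient descent update with an RBF kernel; it is illustrative only (not the paper's turbofan model), and the bandwidth and step size are placeholders.

import jax
import jax.numpy as jnp

def rbf(x, y, h=1.0):
    return jnp.exp(-jnp.sum((x - y) ** 2) / h)

def svgd_step(particles, log_prob, step_size=0.1):
    n = particles.shape[0]
    scores = jax.vmap(jax.grad(log_prob))(particles)  # grad log p at each particle
    # k[i, j] = k(x_j, x_i); grad_k[i, j] = grad wrt x_j of k(x_j, x_i)
    k = jax.vmap(lambda xi: jax.vmap(lambda xj: rbf(xj, xi))(particles))(particles)
    grad_k = jax.vmap(lambda xi: jax.vmap(jax.grad(lambda xj: rbf(xj, xi)))(particles))(particles)
    # Kernelized update: attraction toward high density plus a repulsive term.
    phi = (k @ scores + grad_k.sum(axis=1)) / n
    return particles + step_size * phi

# Usage: transport 100 particles toward a standard Gaussian target.
particles = jax.random.normal(jax.random.PRNGKey(0), (100, 2))
for _ in range(200):
    particles = svgd_step(particles, lambda z: -0.5 * jnp.sum(z ** 2))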
Efficient Nonparametric Tensor Decomposition for Binary and Count Data
Finally, to address the computational cost of GPs, we enhance the model by incorporating sparse orthogonal variational inference over inducing points, which offers a more effective covariance approximation within GPs and enables stochastic natural gradient updates for nonparametric models.
DualVAE: Dual Disentangled Variational AutoEncoder for Recommendation
To address this problem, we propose a Dual Disentangled Variational AutoEncoder (DualVAE) for collaborative recommendation, which combines disentangled representation learning with variational inference to facilitate the generation of implicit interaction data.