Variational Inference
748 papers with code • 1 benchmark • 5 datasets
Fitting approximate posteriors with variational inference transforms the inference problem into an optimization problem, where the goal is (typically) to maximize the evidence lower bound (ELBO) on the log marginal likelihood of the data.
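For reference, the bound being maximized can be written in its standard form (general background, not specific to any paper below): for a variational distribution $q(z)$ over latent variables $z$ and observed data $x$,

$$\log p(x) \;\ge\; \mathrm{ELBO}(q) \;=\; \mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big] \;=\; \log p(x) - \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big),$$

so maximizing the ELBO over $q$ is equivalent to minimizing the KL divergence from $q$ to the true posterior.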
Latest papers with no code
floZ: Evidence estimation from posterior samples with normalizing flows
We propose a novel method (floZ), based on normalizing flows, for estimating the Bayesian evidence (and its numerical uncertainty) from a set of samples drawn from the unnormalized posterior distribution.
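The identity underlying this kind of estimator is standard (a sketch of the general idea; the paper's actual estimator and its uncertainty quantification are more involved): if a flow $q(\theta)$ fit to the samples matches the normalized posterior, then for the unnormalized posterior $\tilde p(\theta) = p(d \mid \theta)\,p(\theta)$,

$$Z = p(d) = \frac{\tilde p(\theta)}{p(\theta \mid d)} \approx \frac{\tilde p(\theta)}{q(\theta)} \quad \text{for any } \theta,$$

so a well-fit normalized density gives pointwise access to the evidence $Z$.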
Calibrating Bayesian Learning via Regularization, Confidence Minimization, and Selective Inference
This paper proposes an extension of variational inference (VI)-based Bayesian learning that integrates calibration regularization for improved in-distribution (ID) performance, confidence minimization for out-of-distribution (OOD) detection, and selective calibration to ensure a synergistic use of calibration regularization and confidence minimization.
Sampling for Model Predictive Trajectory Planning in Autonomous Driving using Normalizing Flows
In this context, normalizing flows, which originate from the field of variational inference, are considered for generating sampling distributions, as they model transformations from simple to more complex distributions.
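The mechanism alluded to is the change-of-variables formula (standard normalizing-flow background, not specific to this paper): an invertible map $f$ pushes a simple base density $p_Z$ forward to

$$p_X(x) = p_Z\big(f^{-1}(x)\big)\,\Big|\det \frac{\partial f^{-1}(x)}{\partial x}\Big|,$$

so both sampling (apply $f$ to base samples) and density evaluation remain tractable.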
Nonlinear sparse variational Bayesian learning based model predictive control with application to PEMFC temperature control
NSVB-MPC uses variational inference to assess predictive accuracy and to make the corrections needed to quantify system uncertainty.
Extending Mean-Field Variational Inference via Entropic Regularization: Theory and Computation
Variational inference (VI) has emerged as a popular method for approximate inference for high-dimensional Bayesian models.
Convergence of coordinate ascent variational inference for log-concave measures via optimal transport
Mean field variational inference (VI) is the problem of finding the closest product (factorized) measure, in the sense of relative entropy, to a given high-dimensional probability measure $\rho$.
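In symbols, and consistent with the abstract's notation, mean-field VI solves

$$\min_{\mu = \mu_1 \otimes \cdots \otimes \mu_d} \mathrm{KL}(\mu \,\|\, \rho),$$

and coordinate ascent VI (CAVI), the subject of the convergence result, updates one factor $\mu_i$ at a time while holding the others fixed.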
Preventing Model Collapse in Gaussian Process Latent Variable Models
Gaussian process latent variable models (GPLVMs) are a versatile family of unsupervised learning models, commonly used for dimensionality reduction.
Modeling uncertainty for Gaussian Splatting
We present Stochastic Gaussian Splatting (SGS): the first framework for uncertainty estimation using Gaussian Splatting (GS).
Fast and Unified Path Gradient Estimators for Normalizing Flows
Recent work shows that path gradient estimators for normalizing flows have lower variance than standard estimators for variational inference, resulting in improved training.
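For background, the canonical path-gradient trick ("sticking the landing", Roeder et al., 2017) drops the high-variance score term by detaching the variational parameters inside $\log q$. A minimal PyTorch sketch on a toy reparameterized Gaussian (a hypothetical illustration of the general idea, not the paper's fast unified estimator for flows):

```python
import math
import torch

torch.manual_seed(0)

# Variational parameters of q(z) = N(mu, exp(log_sigma)^2)
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)

def log_p(z):
    # Toy unnormalized target: standard normal centered at 1
    return -0.5 * ((z - 1.0) ** 2).sum(-1)

def log_q(z, mu, log_sigma):
    sigma = log_sigma.exp()
    return (-0.5 * ((z - mu) / sigma) ** 2 - log_sigma
            - 0.5 * math.log(2 * math.pi)).sum(-1)

# Reparameterized sample: the "path" through which gradients flow
eps = torch.randn(256, 2)
z = mu + log_sigma.exp() * eps

# Path gradient / STL estimator: detach the variational parameters
# inside log q so only the pathwise term (through z) contributes.
elbo_stl = (log_p(z) - log_q(z, mu.detach(), log_sigma.detach())).mean()
elbo_stl.backward()
print(mu.grad, log_sigma.grad)
```

The only difference from the standard reparameterized ELBO gradient is the `.detach()` on the parameters of $\log q$, which removes the score-function term whose expectation is zero but whose variance is not.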
Federated Bayesian Deep Learning: The Application of Statistical Aggregation Methods to Bayesian Models
Aggregation strategies have been developed to pool or fuse the weights and biases of distributed deterministic models; however, modern deterministic deep learning (DL) models are often poorly calibrated and lack the ability to communicate a measure of epistemic uncertainty in prediction, which is desirable for remote sensing platforms and safety-critical applications.