Bayesian Inference
611 papers with code • 1 benchmark • 7 datasets
Bayesian Inference is a methodology that applies Bayes' rule to estimate model parameters, and their full posterior distribution, from observed data.
Latest papers
Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
Machine learning models often perform poorly under subpopulation shifts in the data distribution.
Scalable Spatiotemporal Prediction with Bayesian Neural Fields
Spatiotemporal datasets, which consist of spatially-referenced time series, are ubiquitous in many scientific and business-intelligence applications, such as air pollution monitoring, disease tracking, and cloud-demand forecasting.
Listening to the Noise: Blind Denoising with Gibbs Diffusion
Assuming arbitrary parametric Gaussian noise, we develop a Gibbs algorithm that alternates between sampling from a conditional diffusion model, trained to map the signal prior through the family of noise distributions, and a Monte Carlo sampler that infers the noise parameters.
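The alternation pattern underlying a Gibbs algorithm can be shown with a classic toy case: a two-block Gibbs sampler for a bivariate Gaussian, where each step draws one coordinate from its exact conditional given the other. This is a generic sketch of the scheme only, not the paper's method, which replaces the conditional draws with a diffusion model and a Monte Carlo update of the noise parameters:

```python
import math
import random

def gibbs_bivariate_gaussian(rho, n_steps, seed=0):
    """Gibbs sampling for a standard bivariate Gaussian with correlation rho."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    cond_sd = math.sqrt(1.0 - rho * rho)  # std dev of each conditional
    samples = []
    for _ in range(n_steps):
        # Alternate exact conditional draws:
        x = rng.gauss(rho * y, cond_sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, cond_sd)   # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_gaussian(rho=0.8, n_steps=5000)
```

The empirical correlation of the chain converges to the target's correlation, which is what makes the alternation a valid sampler for the joint distribution.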
Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning
Our agent assists a human by modeling them as a cooperative planner who communicates joint plans to the assistant. It then performs multimodal Bayesian inference over the human's goal from actions and language, using large language models (LLMs) to evaluate the likelihood of an instruction given a hypothesized plan.
Sequential transport maps using SoS density estimation and $\alpha$-divergences
Transport-based density estimation methods are receiving growing interest because of their ability to efficiently generate samples from the approximated density.
Stochastic Approximation with Biased MCMC for Expectation Maximization
In practice, MCMC-SAEM is often run with asymptotically biased MCMC, whose consequences are less well understood theoretically.
BlackJAX: Composable Bayesian inference in JAX
BlackJAX is a library implementing sampling and variational inference algorithms commonly used in Bayesian computation.
Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning
Running a dedicated model for each task is computationally expensive, so there is great interest in multi-task learning (MTL).
Diffusive Gibbs Sampling
The inadequate mixing of conventional Markov Chain Monte Carlo (MCMC) methods for multi-modal distributions presents a significant challenge in practical applications such as Bayesian inference and molecular dynamics.
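The mixing problem is easy to reproduce: a random-walk Metropolis sampler on a well-separated two-mode Gaussian mixture, started in one mode, essentially never crosses to the other. This toy sketch (not the paper's algorithm; all names and constants are illustrative) demonstrates the failure that methods like Diffusive Gibbs Sampling aim to address:

```python
import math
import random

def log_mixture_density(x, mu=20.0):
    # Unnormalized log density of 0.5*N(-mu, 1) + 0.5*N(+mu, 1).
    return math.log(0.5 * math.exp(-0.5 * (x + mu) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - mu) ** 2))

def random_walk_metropolis(n_steps, x0=-20.0, prop_sd=0.5, seed=0):
    """Random-walk Metropolis with a local Gaussian proposal."""
    rng = random.Random(seed)
    x, logp = x0, log_mixture_density(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, prop_sd)
        logp_new = log_mixture_density(x_new)
        # Accept with probability min(1, p(x_new) / p(x)).
        if math.log(rng.random()) < logp_new - logp:
            x, logp = x_new, logp_new
        chain.append(x)
    return chain

chain = random_walk_metropolis(5000)
# The chain stays trapped near the mode at -20 and never samples the
# equally probable mode at +20.
```

With a local proposal, crossing the low-density region between the modes requires an astronomically unlikely sequence of accepted uphill moves, so the chain's samples badly misrepresent the target despite each individual step being valid.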
Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers
Many machine learning applications require operating on a spatially distributed dataset.