Bayesian Inference
624 papers with code • 1 benchmark • 7 datasets
Bayesian Inference is a methodology that employs Bayes' rule to estimate parameters and, more generally, their full posterior distribution.
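As a minimal illustration of Bayes' rule in action, consider a conjugate Beta-Bernoulli model, where the posterior over a coin's heads probability is available in closed form (an illustrative sketch, not tied to any particular paper below):

```python
# Bayes' rule: posterior ∝ likelihood × prior.
# With a Beta(a, b) prior on a coin's heads probability and Bernoulli
# observations, the posterior is Beta(a + heads, b + tails).
def beta_bernoulli_posterior(a, b, observations):
    heads = sum(observations)
    tails = len(observations) - heads
    return a + heads, b + tails

# Start from a uniform Beta(1, 1) prior and observe 7 heads, 3 tails.
a_post, b_post = beta_bernoulli_posterior(1, 1, [1] * 7 + [0] * 3)
posterior_mean = a_post / (a_post + b_post)  # (1 + 7) / (1 + 7 + 1 + 3)
```

Conjugacy makes this toy case exact; most of the papers below deal with models where the posterior has no closed form and must be approximated.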
Libraries
Use these libraries to find Bayesian Inference models and implementations.
Latest papers
A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry
Bayesian neural networks (BNNs) are a principled approach to modeling predictive uncertainties in deep learning, which are important in safety-critical applications.
Tractable Function-Space Variational Inference in Bayesian Neural Networks
Recognizing that the primary object of interest in most settings is the distribution over functions induced by the posterior distribution over neural network parameters, we frame Bayesian inference in neural networks explicitly as inferring a posterior distribution over functions and propose a scalable function-space variational inference method that allows incorporating prior information and results in reliable predictive uncertainty estimates.
Diffusion Models With Learned Adaptive Noise
Diffusion models have gained traction as powerful algorithms for synthesizing high-quality images.
Gaussian process learning of nonlinear dynamics
Through a Bayesian scheme, the posterior distribution provides a probabilistic estimate of the model parameters, thereby quantifying uncertainties arising from noisy state data and the learning process.
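The Gaussian process posterior at the heart of such schemes has a closed form. Below is a minimal sketch of GP regression on noisy observations of a simple 1-D function, standing in for learned dynamics; the kernel choice and data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel k(x, x') = s^2 * exp(-(x - x')^2 / (2 l^2)).
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    # Closed-form GP posterior mean and covariance at the test inputs.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mean, cov

# Noisy observations of a stand-in dynamics map f(x) = sin(x).
X = np.linspace(0, 2 * np.pi, 20)
y = np.sin(X) + 0.01 * np.random.default_rng(0).normal(size=20)
mu, cov = gp_posterior(X, y, X)  # diag(cov) quantifies remaining uncertainty
```

The posterior mean tracks the underlying function, while the posterior covariance carries exactly the uncertainty quantification the excerpt describes.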
Uncertainty Quantification in Heterogeneous Treatment Effect Estimation with Gaussian-Process-Based Partially Linear Model
We propose a Bayesian inference framework that quantifies the uncertainty in treatment effect estimation to support decision-making in a relatively small sample size setting.
Automatic Rao-Blackwellization for Sequential Monte Carlo with Belief Propagation
Exact Bayesian inference on state-space models (SSMs) is in general intractable, and unfortunately, basic Sequential Monte Carlo (SMC) methods do not yield correct approximations for complex models.
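For reference, the "basic SMC" baseline is the bootstrap particle filter: propagate particles through the transition model, reweight by the observation likelihood, and resample. A minimal sketch for an assumed linear-Gaussian SSM (not the paper's Rao-Blackwellized method):

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, seed=0):
    # Basic SMC for the toy SSM  x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + w_t,
    # with v_t, w_t standard normal.
    rng = np.random.default_rng(seed)
    particles = rng.normal(size=n_particles)
    means = []
    for y in ys:
        particles = 0.9 * particles + rng.normal(size=n_particles)  # propagate
        logw = -0.5 * (y - particles) ** 2       # Gaussian observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))      # approximate filtering mean
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        particles = particles[idx]
    return np.array(means)

ys = np.array([0.5, 1.0, 0.8, 1.2, 0.9])
filter_means = bootstrap_particle_filter(ys)  # approx E[x_t | y_1..y_t]
```

In this linear-Gaussian case the exact answer is the Kalman filter; the point of Rao-Blackwellization is to solve such tractable sub-structure analytically and spend particles only on the rest.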
Calibrated One Round Federated Learning with Bayesian Inference in the Predictive Space
To improve scalability for larger models, one common Bayesian approach is to approximate the global predictive posterior by multiplying local predictive posteriors.
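The key identity behind multiplying local posteriors is that a product of Gaussian densities is again Gaussian, with precisions adding. A sketch under the simplifying assumption of univariate Gaussian local predictive posteriors (illustrative, not the paper's algorithm):

```python
import numpy as np

def product_of_gaussians(means, variances):
    # Normalized product of Gaussian densities: precisions add,
    # and the mean is the precision-weighted average of the means.
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()
    mean = var * np.sum(precisions * np.asarray(means, dtype=float))
    return mean, var

# Two clients' local predictive posteriors, combined on a server.
mean, var = product_of_gaussians([1.0, 3.0], [1.0, 1.0])
# With equal variances the combined mean is the average and the variance halves.
```

This also shows why calibration matters: the product is systematically more confident (lower variance) than any single local posterior, which the paper's one-round scheme must correct for.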
Uncertainty Quantification and Propagation in Surrogate-based Bayesian Inference
This is a task where the propagation of surrogate uncertainty is especially relevant, because failing to account for it may lead to biased and/or overconfident estimates of the parameters of interest.
nbi: the Astronomer's Package for Neural Posterior Estimation
We identify three critical issues: the need for custom featurizer networks tailored to the observed data, the inference inexactness, and the under-specification of physical forward models.
Distilled Self-Critique of LLMs with Synthetic Data: a Bayesian Perspective
This paper proposes an interpretation of RLAIF as Bayesian inference by introducing distilled Self-Critique (dSC), which refines the outputs of an LLM through a Gibbs sampler that is later distilled into a fine-tuned model.