Probabilistic Programming
87 papers with code • 0 benchmarks • 0 datasets
Probabilistic programming languages (PPLs) are designed to describe probabilistic models and then perform inference in those models. PPLs are closely related to graphical models and Bayesian networks, but are more expressive and flexible.
(Image credit: Michael Betancourt)
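To make the intro concrete, here is a minimal sketch of what a probabilistic program does: define a generative model (a prior over a coin's bias plus a Bernoulli likelihood), then infer the posterior by conditioning on data. The model, numbers, and function names are illustrative, not from any particular PPL; the inference scheme is plain likelihood weighting.

```python
import random

def posterior_mean_bias(observations, n_particles=20000, seed=0):
    """Estimate E[bias | observations] by likelihood weighting."""
    rng = random.Random(seed)
    total_w = 0.0
    weighted_bias = 0.0
    for _ in range(n_particles):
        bias = rng.random()                 # prior: bias ~ Uniform(0, 1)
        w = 1.0
        for obs in observations:            # condition on each observed flip
            w *= bias if obs else 1.0 - bias
        total_w += w
        weighted_bias += w * bias
    return weighted_bias / total_w

flips = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]      # 8 heads, 2 tails
print(posterior_mean_bias(flips))           # close to (8+1)/(10+2) = 0.75
```

With a Uniform(0, 1) prior the exact posterior is Beta(9, 3), so the estimate should sit near its mean of 0.75; a real PPL separates the model description from the inference algorithm in exactly this way.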
Latest papers
Black Box Variational Inference with a Deterministic Objective: Faster, More Accurate, and Even More Black Box
We show on a variety of real-world problems that DADVI reliably finds good solutions with default settings (unlike ADVI) and, together with LR covariances, is typically faster and more accurate than standard ADVI.
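As a rough illustration of the idea behind a deterministic ADVI objective (not the paper's implementation): if the standard-normal base draws are fixed once, the Monte Carlo ELBO of automatic differentiation variational inference becomes a deterministic function of the variational parameters and can be optimized with ordinary gradient ascent. A sketch for a 1-D Gaussian target, with all constants and names illustrative:

```python
import math
import random

MU0, SIGMA0 = 2.0, 1.0                        # toy target: p(z) = N(2, 1)
rng = random.Random(0)
eps = [rng.gauss(0, 1) for _ in range(64)]    # base draws, fixed once

def elbo_grads(m, t):
    """Gradients of the fixed-draw ELBO w.r.t. mean m and t = log(std)."""
    s = math.exp(t)
    gm = gt = 0.0
    for e in eps:
        resid = (m + s * e - MU0) / SIGMA0 ** 2   # so d log p/dz = -resid
        gm += -resid
        gt += -resid * e * s
    n = len(eps)
    return gm / n, gt / n + 1.0               # +1 from the entropy term log s

m, t = 0.0, 0.0
for _ in range(2000):                         # deterministic: same draws each step
    gm, gt = elbo_grads(m, t)
    m += 0.05 * gm
    t += 0.05 * gt

print(m, math.exp(t))                         # should land near (2.0, 1.0)
```

Because the objective is deterministic, standard convergence diagnostics and second-order optimizers apply, which is part of what the paper exploits.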
Automatically Marginalized MCMC in Probabilistic Programming
Hamiltonian Monte Carlo (HMC) is a powerful algorithm to sample latent variables from Bayesian models.
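As a toy illustration of what HMC does (not any particular paper's implementation): simulate Hamiltonian dynamics with a leapfrog integrator, then apply a Metropolis correction. A minimal sampler for a 1-D standard normal target; step size and trajectory length are arbitrary choices, and real implementations add step-size adaptation, mass matrices, and multivariate support.

```python
import math
import random

def logp(x):            # log density of N(0, 1), up to a constant
    return -0.5 * x * x

def grad_logp(x):
    return -x

def hmc(n_samples, step=0.2, n_leap=15, seed=1):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0, 1)                   # resample momentum
        x_new, p_new = x, p
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step * grad_logp(x_new)
        for i in range(n_leap):
            x_new += step * p_new
            if i < n_leap - 1:
                p_new += step * grad_logp(x_new)
        p_new += 0.5 * step * grad_logp(x_new)
        # Metropolis correction for discretization error
        h_old = -logp(x) + 0.5 * p * p
        h_new = -logp(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            x = x_new
        samples.append(x)
    return samples

draws = hmc(5000)
mean = sum(draws) / len(draws)        # should be near 0
var = sum(d * d for d in draws) / len(draws)   # should be near 1
```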
TreeFlow: probabilistic programming and automatic differentiation for phylogenetics
Probabilistic programming frameworks are powerful tools for statistical modelling and inference.
Differentiable Quantum Programming with Unbounded Loops
The emergence of variational quantum applications has led to the development of automatic differentiation techniques in quantum computing.
Nonparametric Involutive Markov Chain Monte Carlo
A challenging problem in probabilistic programming is to develop inference algorithms that work for arbitrary programs in a universal probabilistic programming language (PPL).
Ice Core Dating using Probabilistic Programming
Ice cores record crucial information about past climate.
Improved Marginal Unbiased Score Expansion (MUSE) via Implicit Differentiation
We apply the technique of implicit differentiation to boost performance, reduce numerical error, and remove required user-tuning in the Marginal Unbiased Score Expansion (MUSE) algorithm for hierarchical Bayesian inference.
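A generic sketch of implicit differentiation itself (not the MUSE algorithm): when x(θ) is defined implicitly by f(x, θ) = 0, the implicit function theorem gives dx/dθ = −(∂f/∂θ)/(∂f/∂x), so the solver never needs to be differentiated through. The cubic below is an arbitrary example.

```python
# x(theta) is defined implicitly by f(x, theta) = x^3 + theta*x - 1 = 0.

def f(x, theta):
    return x ** 3 + theta * x - 1.0

def df_dx(x, theta):
    return 3.0 * x ** 2 + theta

def df_dtheta(x, theta):
    return x

def solve(theta, x=1.0):
    """Newton's method for f(x, theta) = 0."""
    for _ in range(50):
        x -= f(x, theta) / df_dx(x, theta)
    return x

def dx_dtheta(theta):
    """Implicit function theorem: dx/dtheta = -f_theta / f_x at the root."""
    x = solve(theta)
    return -df_dtheta(x, theta) / df_dx(x, theta)

# The implicit derivative matches a finite-difference estimate:
print(dx_dtheta(1.0))
print((solve(1.001) - solve(0.999)) / 0.002)
```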
Robust leave-one-out cross-validation for high-dimensional Bayesian models
Leave-one-out cross-validation (LOO-CV) is a popular method for estimating out-of-sample predictive accuracy.
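The basic LOO-CV recipe, here with a trivial "predict the sample mean" model and squared error as a toy stand-in for the Bayesian predictive-density version the paper studies (data are made up):

```python
def loo_cv_mse(data):
    """Leave-one-out cross-validation: refit on all but one point, score it."""
    errors = []
    for i, y in enumerate(data):
        held_in = data[:i] + data[i + 1:]      # drop point i
        pred = sum(held_in) / len(held_in)     # the "model": sample mean
        errors.append((y - pred) ** 2)
    return sum(errors) / len(errors)

print(loo_cv_mse([1.0, 2.0, 3.0, 4.0]))        # 20/9, about 2.222
```

Refitting n times is what makes naive LOO-CV expensive; methods like Pareto-smoothed importance sampling approximate it from a single posterior fit.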
Borch: A Deep Universal Probabilistic Programming Language
Ever since the multilayer perceptron was first introduced, the connectionist community has struggled with the concept of uncertainty and how it could be represented in these types of models.
Language Model Cascades
Prompted models have demonstrated impressive few-shot learning abilities.