Bayesian Inference
621 papers with code • 1 benchmark • 7 datasets
Bayesian Inference is a methodology that applies Bayes' rule to update beliefs about model parameters from data, yielding a full posterior distribution over the parameters rather than a single point estimate.
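As a minimal illustration of this update (not tied to any paper below), a conjugate Beta-Bernoulli model gives the posterior in closed form; the function names and numbers here are ours for the sketch:

```python
import numpy as np

# Conjugate Beta-Bernoulli example: prior Beta(a, b) over a coin's
# heads-probability theta, updated with observed 0/1 flips via Bayes' rule.
def beta_bernoulli_posterior(a, b, flips):
    """Return posterior Beta parameters after observing a list of 0/1 flips."""
    heads = sum(flips)
    tails = len(flips) - heads
    return a + heads, b + tails

# Prior Beta(1, 1) is uniform over [0, 1]; observe three heads and one tail.
a_post, b_post = beta_bernoulli_posterior(1, 1, [1, 1, 0, 1])
posterior_mean = a_post / (a_post + b_post)   # (1 + 3) / (2 + 4) = 2/3
```

Conjugacy is the exception; the papers below exist precisely because most posteriors of interest have no such closed form and must be approximated.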
Most implemented papers
Weight Uncertainty in Neural Networks
We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop.
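The idea can be sketched for a single weight: keep a Gaussian variational posterior q(w) = N(mu, sigma^2), sample w via the reparameterization trick, and descend the negative ELBO. This is our own minimal NumPy sketch with hand-derived gradients, not the paper's full algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2*x + unit-variance Gaussian noise.
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)

# Variational posterior q(w) = N(mu, sigma^2) over one weight,
# prior p(w) = N(0, 1); sigma parameterized as exp(log_sigma).
mu, log_sigma, lr = 0.0, -1.0, 1e-3
for _ in range(5000):
    sigma = np.exp(log_sigma)
    eps = rng.normal()
    w = mu + sigma * eps                  # reparameterization trick
    d_nll_dw = np.sum((w * x - y) * x)    # gradient of -log p(D | w)
    d_kl_dmu = mu                         # closed-form KL(q || p) gradients
    d_kl_dsigma = sigma - 1.0 / sigma
    mu -= lr * (d_nll_dw + d_kl_dmu)
    log_sigma -= lr * sigma * (eps * d_nll_dw + d_kl_dsigma)
```

After training, mu sits near the true weight 2.0 and sigma shrinks toward the exact posterior standard deviation; the full method applies the same recipe to every weight of a network via backpropagation.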
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost.
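The paper's practical recipe is simple: keep dropout active at test time and average over stochastic forward passes. A NumPy sketch with made-up stand-in weights (a trained network would supply real ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for a 1-hidden-layer network (in practice, trained weights).
W1 = rng.normal(size=(1, 50))
W2 = rng.normal(size=(50, 1))

def mc_dropout_predict(x, T=200, p=0.5):
    """Keep dropout ON at test time; T stochastic passes approximate
    the Bayesian predictive mean and uncertainty."""
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) > p         # random dropout mask
        h = h * mask / (1.0 - p)               # inverted-dropout scaling
        preds.append(h @ W2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = mc_dropout_predict(np.array([[1.0]]))
```

The spread of the T predictions serves as the model-uncertainty estimate, at the cost of T forward passes instead of one.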
Semi-Supervised Learning with Deep Generative Models
The ever-increasing size of modern data sets, combined with the difficulty of obtaining label information, has made semi-supervised learning a problem of significant practical importance in modern data analysis.
Variational Autoencoders for Collaborative Filtering
We introduce a generative model with a multinomial likelihood and use Bayesian inference for parameter estimation. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of the linear factor models that still largely dominate collaborative filtering research.
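The multinomial likelihood scores a user's click vector against the decoder's distribution over items; assuming the decoder outputs one logit per item, the log-likelihood is just the dot product with the log-softmax (our own small sketch):

```python
import numpy as np

# Multinomial likelihood over items: a user's click-count vector x gets
# log-likelihood x . log_softmax(logits), where logits come from the decoder.
def multinomial_log_likelihood(x, logits):
    z = logits - logits.max()                  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())    # log-softmax over all items
    return float(x @ log_probs)

x = np.array([1.0, 0.0, 2.0, 0.0])       # click counts over 4 items
logits = np.array([0.5, -1.0, 1.0, 0.0]) # hypothetical decoder outputs
ll = multinomial_log_likelihood(x, logits)
```

Because the softmax normalizes over the whole catalog, items compete for probability mass, which suits implicit-feedback ranking better than treating each item independently.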
Bayesian regression and Bitcoin
In this paper, we discuss the method of Bayesian regression and its efficacy for predicting price variation of Bitcoin, a recently popularized virtual, cryptographic currency.
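Stripped of the paper's latent-source machinery, Gaussian-prior Bayesian linear regression has a closed-form posterior; this generic NumPy sketch (synthetic data, our own parameter choices) shows the core computation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bayesian linear regression: prior w ~ N(0, tau2 * I), Gaussian noise with
# variance sigma2. The posterior over w is Gaussian with closed-form moments.
def posterior(X, y, sigma2=1.0, tau2=10.0):
    d = X.shape[1]
    precision = X.T @ X / sigma2 + np.eye(d) / tau2
    cov = np.linalg.inv(precision)
    mean = cov @ X.T @ y / sigma2
    return mean, cov

X = rng.normal(size=(100, 2))
true_w = np.array([1.5, -0.5])
y = X @ true_w + 0.5 * rng.normal(size=100)
w_mean, w_cov = posterior(X, y, sigma2=0.25)   # noise std 0.5 -> sigma2 = 0.25
```

The posterior covariance, not just the mean, is what makes the predictions usable for risk-aware decisions such as trading.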
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout.
Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm
We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization.
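The algorithm moves a set of particles along a kernelized gradient that balances attraction to high-density regions against a repulsive term keeping particles spread out. A 1-D NumPy sketch with a standard-Gaussian target and a fixed RBF bandwidth (the paper uses a median heuristic):

```python
import numpy as np

rng = np.random.default_rng(0)

# SVGD update for particles {x_i} targeting p(x):
#   phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
# Target here is N(0, 1), so grad log p(x) = -x; k is an RBF kernel.
def svgd_step(x, h=1.0, lr=0.1):
    n = len(x)
    grad_logp = -x
    diff = x[:, None] - x[None, :]              # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / h)                    # symmetric kernel matrix
    repulsion = (2.0 / h) * np.sum(diff * K, axis=1)
    return x + lr * (K @ grad_logp + repulsion) / n

x = rng.normal(loc=10.0, scale=0.1, size=50)    # start far from the target
for _ in range(500):
    x = svgd_step(x)
```

With a single particle the repulsive term vanishes and the update reduces to plain gradient ascent on log p, which is the "counterpart of gradient descent" the abstract refers to.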
Variational Dropout and the Local Reparameterization Trick
Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models.
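The local reparameterization trick behind this line of work exploits the fact that if each weight is independently Gaussian, the layer's pre-activations are Gaussian too, so noise can be sampled per data point in activation space. A minimal sketch with arbitrary layer sizes of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Local reparameterization: for weights W_ij ~ N(mu_ij, sigma_ij^2) independent,
# B = X @ W is Gaussian per entry, so sample pre-activations directly.
# This gives lower-variance gradient estimates than sampling W itself.
def sample_activations(X, mu, sigma):
    act_mean = X @ mu                     # E[B]
    act_var = (X**2) @ (sigma**2)         # Var[B], elementwise
    return act_mean + np.sqrt(act_var) * rng.normal(size=act_mean.shape)

X = rng.normal(size=(32, 10))             # a minibatch of 32 inputs
mu = rng.normal(size=(10, 4))             # weight means
sigma = 0.1 * np.ones((10, 4))            # weight standard deviations
B = sample_activations(X, mu, sigma)
```

Each row of the minibatch gets its own activation noise, which is exactly what decorrelates the per-example gradients.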
Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows
We present Sequential Neural Likelihood (SNL), a new method for Bayesian inference in simulator models, where the likelihood is intractable but simulating data from the model is possible.
A Simple Baseline for Bayesian Uncertainty in Deep Learning
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning.
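The diagonal variant of the idea fits a Gaussian to the SGD iterates collected along a stochastic weight averaging (SWA) trajectory: mean from the running average, variance from the running second moment. A toy NumPy sketch with fabricated iterates standing in for real SGD snapshots:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated late-training weight snapshots jittering around (1.0, -2.0);
# in practice these come from SGD epochs with a constant learning rate.
iterates = [np.array([1.0, -2.0]) + 0.1 * rng.normal(size=2) for _ in range(30)]

theta_swa = np.mean(iterates, axis=0)                       # SWA mean
second_moment = np.mean([w**2 for w in iterates], axis=0)
diag_var = np.maximum(second_moment - theta_swa**2, 1e-12)  # diagonal variance

def sample_weights():
    """Draw one weight sample from the fitted diagonal Gaussian."""
    return theta_swa + np.sqrt(diag_var) * rng.normal(size=theta_swa.shape)

samples = np.array([sample_weights() for _ in range(1000)])
```

At test time one averages the predictions of networks built from such weight samples, giving calibrated uncertainty at roughly the cost of standard training.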