Gradient-based Adaptive Markov Chain Monte Carlo

NeurIPS 2019 1 code implementation

We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC) proposal distributions to intractable targets.
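As a rough illustration of the idea (a sketch, not the paper's algorithm), one can adapt a random-walk proposal's scale by stochastic gradient ascent on the expected squared jump distance, using a reparameterised proposal and the target's score function. The standard-normal target, step sizes, and gradient clipping below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A standard normal stands in for the intractable target; the method only
# needs the target's log-density and score (gradient of the log-density).
log_target = lambda x: -0.5 * x**2
score = lambda x: -x

x, log_sigma, lr = 0.0, 0.0, 0.005
for t in range(10000):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal()
    prop = x + sigma * eps                      # reparameterised proposal
    log_ratio = log_target(prop) - log_target(x)
    alpha = min(1.0, np.exp(log_ratio))         # Metropolis acceptance prob.
    jump2 = (sigma * eps) ** 2                  # squared jump distance
    # Stochastic gradient of E[alpha * jump^2] w.r.t. log_sigma; the
    # acceptance term only varies with sigma when the move is downhill.
    d_alpha = alpha * score(prop) * sigma * eps if log_ratio < 0 else 0.0
    grad = np.clip(d_alpha * jump2 + 2.0 * alpha * jump2, -50.0, 50.0)
    log_sigma += lr * grad                      # adapt the proposal scale
    if rng.uniform() < alpha:
        x = prop                                # standard Metropolis accept

sigma = np.exp(log_sigma)
```

The adaptation runs off the same score evaluations the sampler already uses; the paper itself learns richer, parameterised proposal distributions with its own adaptation objective.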

Stein Point Markov Chain Monte Carlo

9 May 2019 1 code implementation


Stein Points are a class of algorithms for approximating an intractable target with a discrete set of points, which proceed by sequentially minimising a Stein discrepancy between the empirical measure and the target and, hence, require the solution of a non-convex optimisation problem to obtain each new point.

BAYESIAN INFERENCE
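The greedy flavour of this construction can be sketched as follows: given the target's score function, each new point is chosen from a candidate grid so as to minimise the kernel Stein discrepancy of the growing point set. The standard-normal target, Gaussian base kernel, and bandwidth are illustrative choices, not the paper's configuration.

```python
import numpy as np

# Target: standard normal; its score function is s(x) = d/dx log p(x) = -x.
score = lambda x: -x
h = 1.0  # Gaussian base-kernel bandwidth (an illustrative choice)

def k0(x, y):
    # Stein (Langevin) kernel built from the Gaussian base kernel k(x, y):
    # k0 = dxdy_k + s(x) dy_k + s(y) dx_k + s(x) s(y) k.
    d = x - y
    k = np.exp(-d**2 / (2 * h**2))
    dx_k = -d / h**2 * k
    dy_k = d / h**2 * k
    dxdy_k = (1 / h**2 - d**2 / h**4) * k
    return dxdy_k + score(x) * dy_k + score(y) * dx_k + score(x) * score(y) * k

# Greedy Stein Points: each new point minimises the (squared) kernel Stein
# discrepancy of the enlarged point set over a fixed candidate grid.
cands = np.linspace(-4, 4, 401)
points = []
for n in range(20):
    obj = [0.5 * k0(c, c) + sum(k0(p, c) for p in points) for c in cands]
    points.append(cands[int(np.argmin(obj))])
```

Each candidate is scored by the increment it adds to the squared discrepancy, so only the new pairwise terms need evaluating; the resulting points spread out to cover the target rather than clustering at the mode.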

On Markov chain Monte Carlo methods for tall data

11 May 2015 1 code implementation

Finally, so far we have only been able to propose subsampling-based methods that perform well in scenarios where the Bernstein-von Mises approximation of the target posterior distribution is excellent.

BAYESIAN INFERENCE

Pseudo-Extended Markov chain Monte Carlo

NeurIPS 2019 1 code implementation

In this paper, we introduce the pseudo-extended MCMC method as a simple approach for improving the mixing of the MCMC sampler for multi-modal posterior distributions.

sgmcmc: An R Package for Stochastic Gradient Markov Chain Monte Carlo

2 Oct 2017 1 code implementation

To do this, the package uses the software library TensorFlow, which has a variety of statistical distributions and mathematical operations as standard, meaning a wide class of models can be built using this framework.

BAYESIAN INFERENCE
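The package itself is written in R on top of TensorFlow; a minimal NumPy sketch of the core update it provides, stochastic gradient Langevin dynamics (SGLD), looks like this. The toy Gaussian-mean model, step size, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: estimate the mean of a Gaussian with known unit variance.
N, batch = 1000, 50
data = rng.normal(2.0, 1.0, size=N)

theta, eps = 0.0, 1e-3  # step size eps is an illustrative choice
trace = []
for t in range(4000):
    idx = rng.integers(0, N, size=batch)
    # Stochastic gradient of the log posterior: a N(0, 10^2) prior term
    # plus a minibatch likelihood estimate rescaled by N / batch.
    grad = -theta / 100.0 + (N / batch) * np.sum(data[idx] - theta)
    # SGLD update: half a gradient step plus Gaussian noise whose variance
    # matches the step size, so the iterates sample rather than optimise.
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    trace.append(theta)

posterior_mean = float(np.mean(trace[1000:]))
```

Because only a minibatch of the data is touched per iteration, the cost per step is independent of the full dataset size, which is the point of the stochastic gradient MCMC methods the package wraps.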

Improving Sampling from Generative Autoencoders with Markov Chains

28 Oct 2016 1 code implementation

Generative autoencoders are autoencoders trained to softly enforce a prior on the latent distribution learned by the inference model.

Efficient Probabilistic Inference in the Quest for Physics Beyond the Standard Model

NeurIPS 2019 1 code implementation

We present a novel probabilistic programming framework that couples directly to existing large-scale simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way.

PROBABILISTIC PROGRAMMING
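The record-and-control idea can be sketched language-agnostically: the simulator draws all of its randomness through a callback supplied by the inference engine, which can either record the draws or force previously recorded values. The `Trace` class and toy simulator below are illustrative names, not the paper's actual protocol or API.

```python
import random

class Trace:
    """Records named random draws, or forces previously supplied values."""

    def __init__(self, controlled=None):
        self.record = []                 # list of (name, value) pairs
        self.controlled = controlled or {}

    def sample(self, name, draw):
        if name in self.controlled:      # the engine controls this draw
            value = self.controlled[name]
        else:                            # the engine merely records it
            value = draw()
        self.record.append((name, value))
        return value

def simulator(trace):
    # Stand-in for a large-scale simulator: every random number it needs
    # flows through the trace's sample() callback.
    energy = trace.sample("energy", lambda: random.gauss(10.0, 2.0))
    hits = trace.sample("hits", lambda: random.randint(0, int(energy)))
    return hits

random.seed(0)
recorded = Trace()
simulator(recorded)                      # forward run: record the draws
replayed = Trace(controlled=dict(recorded.record))
assert simulator(replayed) == recorded.record[-1][1]   # controlled replay
```

Because the simulator only ever talks to the `sample` callback, the same interception works across languages and processes, which is what lets a general-purpose inference engine steer an existing simulator without modifying its internals.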

Generalizing Hamiltonian Monte Carlo with Neural Networks

ICLR 2018 2 code implementations

We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution.