Computing the quality of the Laplace approximation

24 Nov 2017 · Guillaume P. Dehaene

Bayesian inference requires approximation methods in order to be computationally tractable, but for most of them it is impossible to quantify how close the approximation is to the true posterior. In this work, we present a theorem upper-bounding the KL divergence between a log-concave target density $f\left(\boldsymbol{\theta}\right)$ and its Laplace approximation $g\left(\boldsymbol{\theta}\right)$. The bound we present is computable: on the classical logistic regression model, we find our bound to be almost exact as long as the dimensionality of the parameter space is high. The approach we followed in this work can be extended to other Gaussian approximations, as we will do in an extended version of this work, to be submitted to the Annals of Statistics. It will then become a critical tool for characterizing whether, for a given problem, a given Gaussian approximation is suitable, or whether a more precise alternative method should be used instead.
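
The paper's KL bound itself is specific to the theorem and is not reproduced here, but as a minimal sketch of the setting it applies to, the snippet below constructs the Laplace approximation $g$ of a logistic-regression posterior $f$: find the MAP estimate and take a Gaussian centered there with covariance equal to the inverse Hessian of $-\log f$ at the mode. All function names, the standard-normal prior, and the synthetic data are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, X, y, prior_var=1.0):
    """-log f(theta) up to a constant: logistic likelihood (labels in {-1,+1})
    plus an isotropic Gaussian prior (an assumption made for this sketch)."""
    z = X @ theta
    nll = np.sum(np.logaddexp(0.0, -y * z))  # sum_i log(1 + exp(-y_i x_i^T theta))
    return nll + 0.5 * np.dot(theta, theta) / prior_var

def laplace_approximation(X, y, prior_var=1.0):
    """Return (mean, covariance) of the Gaussian Laplace approximation g."""
    d = X.shape[1]
    res = minimize(neg_log_posterior, np.zeros(d), args=(X, y, prior_var), method="BFGS")
    theta_map = res.x
    # Hessian of -log f at the mode: X^T diag(p(1-p)) X + prior precision
    p = 1.0 / (1.0 + np.exp(-(X @ theta_map)))
    H = X.T @ (X * (p * (1.0 - p))[:, None]) + np.eye(d) / prior_var
    return theta_map, np.linalg.inv(H)

# Example usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=200))
mean, cov = laplace_approximation(X, y)
```

A bound of the kind the paper proposes would then be evaluated from quantities available at this mode (e.g. derivatives of $\log f$), which is what makes it computable in practice.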
