On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case

6 Dec 2018 · M. Barkhagen, N. H. Chau, É. Moulines, M. Rásonyi, S. Sabanis, Y. Zhang

We study the problem of sampling from a probability distribution $\pi$ on $\mathbb{R}^d$ which has a density with respect to the Lebesgue measure known up to a normalization factor, $x \mapsto \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \, \mathrm{d}y$. We analyze a sampling method based on the Euler discretization of the Langevin stochastic differential equation under the assumptions that the potential $U$ is continuously differentiable, $\nabla U$ is Lipschitz, and $U$ is strongly convex. We focus on the case where the gradient of the log-density cannot be computed directly, but unbiased estimates of the gradient from possibly dependent observations are available. This setting can be seen as a combination of stochastic-approximation-type algorithms (here, stochastic gradient) with discretized Langevin dynamics. We obtain an upper bound on the Wasserstein-2 distance between the law of the iterates of this algorithm and the target distribution $\pi$, with constants depending explicitly on the Lipschitz and strong convexity constants of the potential and on the dimension of the space. Finally, under weaker assumptions on $U$ and its gradient, but in the presence of independent observations, we obtain analogous results in Wasserstein-2 distance.
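As a rough illustration of the algorithm the abstract describes, the following is a minimal sketch of stochastic gradient Langevin dynamics: an Euler discretization of the Langevin SDE in which the exact gradient $\nabla U$ is replaced by an unbiased estimate. The function names (`sgld`, `noisy_grad`), the step size, and the toy quadratic potential $U(x) = \|x\|^2/2$ (so that $\pi = \mathcal{N}(0, I_d)$) are illustrative assumptions, not the paper's experimental setup; in particular, the i.i.d. gradient noise below stands in for the dependent data streams analyzed in the paper.

```python
import numpy as np

def sgld(grad_estimate, x0, step, n_iters, rng):
    """Euler discretization of Langevin dynamics with stochastic gradients:
        x_{k+1} = x_k - step * H(x_k) + sqrt(2 * step) * xi_k,
    where H(x) is an unbiased estimate of grad U(x) and xi_k ~ N(0, I_d).
    """
    d = x0.shape[0]
    x = x0.copy()
    samples = np.empty((n_iters, d))
    for k in range(n_iters):
        noise = rng.standard_normal(d)
        x = x - step * grad_estimate(x, rng) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Toy example (hypothetical): U(x) = ||x||^2 / 2, target pi = N(0, I_d).
# The noisy gradient is unbiased: E[H(x)] = grad U(x) = x.
rng = np.random.default_rng(0)

def noisy_grad(x, rng):
    return x + 0.1 * rng.standard_normal(x.shape)

samples = sgld(noisy_grad, x0=np.zeros(2), step=1e-2, n_iters=50_000, rng=rng)
# After burn-in, empirical mean and variance should be near 0 and 1.
print(samples[10_000:].mean(axis=0), samples[10_000:].var(axis=0))
```

The paper's bounds quantify how the Wasserstein-2 distance between the law of such iterates and $\pi$ depends on the step size, the Lipschitz and strong convexity constants, and the dimension; the sketch above only shows the mechanics of the iteration.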

