1 code implementation • 24 Jul 2023 • Pierre Bras, Gilles Pagès
We propose a new algorithm for variance reduction when estimating $\mathbb{E}[f(X_T)]$, where $X$ is the solution to a stochastic differential equation and $f$ is a test function.
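For orientation only, and not the paper's algorithm: the sketch below estimates $\mathbb{E}[f(X_T)]$ for a geometric Brownian motion by crude Monte Carlo on an Euler-Maruyama scheme, then applies classical antithetic variates as a baseline variance-reduction technique. The dynamics, the payoff $f$, and all parameter values are hypothetical choices.

```python
import numpy as np

# Hypothetical toy setup: X is a geometric Brownian motion and f a call-style
# payoff. This illustrates baseline variance reduction (antithetic variates)
# for estimating E[f(X_T)]; it is NOT the algorithm proposed in the paper.
rng = np.random.default_rng(0)
mu, sigma, x0, T, K = 0.05, 0.4, 1.0, 1.0, 1.0
n_paths, n_steps = 50_000, 100
dt = T / n_steps

def terminal_value(dW):
    """Euler-Maruyama scheme for dX = mu*X dt + sigma*X dW; returns X_T."""
    x = np.full(dW.shape[0], x0)
    for k in range(n_steps):
        x = x + mu * x * dt + sigma * x * dW[:, k]
    return x

f = lambda x: np.maximum(x - K, 0.0)  # test function

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
crude = f(terminal_value(dW))                    # crude Monte Carlo
paired = 0.5 * (crude + f(terminal_value(-dW)))  # pair each path with its mirror

print(f"crude MC:      mean={crude.mean():.4f}  var={crude.var():.5f}")
print(f"antithetic MC: mean={paired.mean():.4f}  var={paired.var():.5f}")
```

Pairing each sequence of Brownian increments with its negation makes the two payoff evaluations negatively correlated when $f$ is monotone, so the variance of their average drops below that of the crude estimator.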
no code implementations • 8 Mar 2023 • Pierre Bras
For $V : \mathbb{R}^d \to \mathbb{R}$ coercive, we study the convergence rate, in $L^1$-distance, of the empirical minimizer, that is, the minimizer of $V$ computed from a finite number $n$ of noisy samples, to the true minimizer of $V$.
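A toy numerical illustration of the object being studied, with an assumed $V$, noise level, and sampling domain that do not come from the paper: draw $n$ noisy evaluations of $V$, take the location of the smallest one as the empirical minimizer, and average its distance to the true minimizer over repeated trials.

```python
import numpy as np

# Toy illustration (not the paper's analysis): for V(x) = x^2, the empirical
# minimizer is the sample point where the noisy evaluation of V is smallest.
# We estimate E|x_hat_n - x_star| as the number of samples n grows.
rng = np.random.default_rng(1)
V = lambda x: x ** 2
x_star = 0.0

def empirical_minimizer(n):
    xs = rng.uniform(-2.0, 2.0, size=n)           # n sample locations
    noisy = V(xs) + rng.normal(0.0, 0.1, size=n)  # noisy evaluations of V
    return xs[np.argmin(noisy)]

for n in [10, 100, 1_000, 10_000]:
    dist = np.mean([abs(empirical_minimizer(n) - x_star) for _ in range(200)])
    print(f"n={n:6d}  E|x_hat - x*| ~ {dist:.4f}")
```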
1 code implementation • 27 Dec 2022 • Pierre Bras
Training a very deep neural network is a challenging task: the deeper the network, the more non-linear it is.
1 code implementation • 22 Dec 2022 • Pierre Bras, Gilles Pagès
Stochastic Gradient Langevin Dynamics (SGLD) algorithms, which add noise to the classic gradient descent, are known to improve the training of very deep neural networks in some cases.
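A minimal sketch of the generic SGLD-style update on a hypothetical quadratic loss (an illustration of the scheme's shape, not of the specific algorithms analysed in the paper): a plain gradient step plus Gaussian noise whose scale shrinks with the step size.

```python
import numpy as np

# SGLD-style update on the toy loss L(theta) = ||theta||^2 / 2:
#   theta <- theta - gamma * grad L(theta) + sqrt(2 * gamma / beta) * N(0, I)
# All parameter values here are hypothetical.
rng = np.random.default_rng(2)
d, gamma, beta, n_iters = 10, 1e-2, 1e3, 5_000

grad = lambda theta: theta  # gradient of L(theta) = ||theta||^2 / 2
theta = rng.normal(size=d)

for _ in range(n_iters):
    theta = theta - gamma * grad(theta) \
            + np.sqrt(2.0 * gamma / beta) * rng.normal(size=d)

print("final |theta| =", np.linalg.norm(theta))  # fluctuates near the minimum
```

The injected noise lets the iterates escape regions that trap plain gradient descent, which is the intuition behind applying Langevin dynamics to very deep networks.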
no code implementations • 27 Jan 2021 • Pierre Bras
We assume instead that the minimum is strictly polynomial and give a higher-order nested expansion of $f$ at $x^\star$ that depends on every coordinate.
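To fix ideas, a hypothetical example (not taken from the paper) of a strictly polynomial, degenerate minimum whose order of vanishing differs across coordinates, which is why an expansion of $f$ at $x^\star$ has to track each coordinate separately:

```latex
% Hypothetical example: a strictly polynomial minimum at x* = (0, 0) with
% coordinate-dependent leading order, so no single-order Taylor expansion
% captures the local behaviour.
\[
  f(x_1, x_2) = x_1^2 + x_2^4, \qquad
  \nabla f(x^\star) = 0, \qquad
  \nabla^2 f(x^\star) = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}.
\]
% The Hessian is degenerate in the x_2 direction: near x*, f vanishes to
% order 2 in x_1 but to order 4 in x_2, hence a coordinate-wise expansion.
```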