no code implementations • 5 Feb 2024 • Sobihan Surendran, Antoine Godichon-Baggioni, Adeline Fermanian, Sylvain Le Corff
This paper provides a comprehensive non-asymptotic analysis of SGD with biased gradients and adaptive steps for convex and non-convex smooth functions.
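The setting can be pictured with a small sketch. This is not the paper's algorithm or analysis, just a generic illustration, assuming a toy quadratic objective, a hypothetical minimizer `x_star`, an artificial additive gradient bias, and Adagrad-style coordinate-wise adaptive steps as one example of an adaptive-step scheme:

```python
import numpy as np

# Illustrative sketch only: SGD with Adagrad-style adaptive steps on a toy
# quadratic f(x) = 0.5 * ||x - x_star||^2, where the gradient estimates are
# noisy AND carry a small deterministic bias term.
rng = np.random.default_rng(0)
x_star = np.array([1.0, -2.0])   # hypothetical minimizer (assumption)
x = np.zeros(2)
G = np.zeros(2)                  # running sum of squared gradients
eta, bias, eps = 0.5, 0.01, 1e-8

for _ in range(5000):
    grad = (x - x_star) + bias + 0.1 * rng.standard_normal(2)
    G += grad ** 2
    x -= eta * grad / np.sqrt(G + eps)   # coordinate-wise adaptive step

print(np.round(x, 2))  # near x_star, up to the bias and residual noise
```

The bias term shifts the limit point away from the exact minimizer, which is precisely the kind of effect a non-asymptotic analysis with biased gradients has to quantify.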
no code implementations • 15 Jan 2024 • Antoine Godichon-Baggioni, Wei Lu, Bruno Portier
This paper addresses second-order stochastic optimization for estimating the minimizer of a convex function written as an expectation.
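As a rough picture of second-order stochastic optimization in this setting (a generic stochastic-Newton-type recursion, not the paper's estimator), one can minimize an expected quadratic loss whose minimizer is a mean, averaging per-sample Hessians online and stepping with the inverse of that running estimate; the fixed matrix `A` and mean `mu` below are assumptions for the toy problem:

```python
import numpy as np

# Illustrative sketch only: a stochastic Newton recursion for minimizing
# E[f(x, Z)] with f(x, z) = 0.5 * (x - z)^T A (x - z), whose minimizer is E[Z].
rng = np.random.default_rng(1)
A = np.array([[2.0, 0.5], [0.5, 1.0]])  # hypothetical fixed Hessian (assumption)
mu = np.array([3.0, -1.0])              # E[Z], the minimizer (assumption)
x = np.zeros(2)
H = np.eye(2)                           # running Hessian estimate

for n in range(1, 20001):
    z = mu + rng.standard_normal(2)
    grad = A @ (x - z)                  # unbiased gradient estimate at x
    H += (A - H) / n                    # online average of sample Hessians
    x -= np.linalg.solve(H, grad) / n   # Newton-type step with a 1/n rate

print(np.round(x, 2))  # close to mu
```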
no code implementations • 3 Apr 2023 • Antoine Godichon-Baggioni, Wei Lu
In the context of large samples, a small number of outlying individuals can corrupt basic statistical indicators such as the mean.
no code implementations • 1 Mar 2023 • Antoine Godichon-Baggioni, Pierre Tarrago
In stochastic optimization, a common tool for dealing sequentially with large samples is the well-known stochastic gradient algorithm.
no code implementations • 25 May 2022 • Antoine Godichon-Baggioni, Nicklas Werge, Olivier Wintenberger
This paper addresses stochastic optimization in a streaming setting with time-dependent and biased gradient estimates.
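A streaming setup can be sketched as follows. This is a generic Polyak-Ruppert averaged SGD pass over arriving mini-batches, not the paper's estimator; the quadratic risk, minimizer `x_star`, batch size, and step exponent are all assumptions:

```python
import numpy as np

# Illustrative sketch only: averaged SGD processed over a data stream, where
# each arriving mini-batch yields a noisy gradient estimate of the quadratic
# risk 0.5 * ||x - E[Z]||^2.
rng = np.random.default_rng(3)
x_star = np.array([0.5, 2.0])   # hypothetical risk minimizer (assumption)
x = np.zeros(2)
x_bar = np.zeros(2)             # running Polyak-Ruppert average

for n in range(1, 10001):
    batch = x_star + rng.standard_normal((8, 2))  # one mini-batch from the stream
    grad = x - batch.mean(axis=0)                 # gradient estimate at x
    x -= grad / n ** 0.6                          # slowly decaying step size
    x_bar += (x - x_bar) / n                      # online average of iterates

print(np.round(x_bar, 2))  # averaged iterate near x_star
```

Averaging the iterates is a standard way to recover fast rates from slowly decaying steps; handling time-dependent and biased gradient estimates on top of this is what the streaming analysis addresses.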
no code implementations • 15 Sep 2021 • Antoine Godichon-Baggioni, Nicklas Werge, Olivier Wintenberger
We provide non-asymptotic convergence rates of various gradient-based algorithms; this includes the famous Stochastic Gradient (SG) descent (a.k.a.