The Complexity of Finding Stationary Points with Stochastic Gradient Descent

ICML 2020 · Yoel Drori, Ohad Shamir

We study the iteration complexity of stochastic gradient descent (SGD) for minimizing the gradient norm of smooth, possibly nonconvex functions. We provide several results implying that the $\mathcal{O}(\epsilon^{-4})$ upper bound of Ghadimi and Lan (2013), for making the average gradient norm less than $\epsilon$, cannot be improved upon unless a combination of additional assumptions is made. Notably, this holds even if we limit ourselves to convex quadratic functions. We also show that for nonconvex functions, the feasibility of minimizing gradients with SGD is surprisingly sensitive to the choice of optimality criterion.
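The criterion discussed in the abstract is the average gradient norm along the SGD trajectory. Below is a minimal numerical sketch, not taken from the paper: it runs constant-step-size SGD on an illustrative convex quadratic with additive Gaussian gradient noise and records when the running average of the gradient norms first drops below $\epsilon$. The dimension, matrix, noise level, and step-size rule are all assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions, not the paper's construction):
# plain SGD on f(x) = 0.5 * x^T A x with unbiased Gaussian gradient noise,
# tracking when the average gradient norm over the iterates falls below epsilon.
import numpy as np

rng = np.random.default_rng(0)

d = 10                                      # problem dimension (illustrative)
A = np.diag(np.linspace(0.1, 1.0, d))       # positive-definite, so f is smooth and convex
sigma = 1.0                                 # gradient-noise standard deviation
epsilon = 0.1                               # target accuracy for the average gradient norm
eta = epsilon**2 / (sigma**2 * d)           # conservative constant step size (heuristic choice)
T = 500_000                                 # iteration budget

x = rng.normal(size=d)                      # arbitrary starting point
running_sum = 0.0
first_hit = None

for t in range(1, T + 1):
    grad = A @ x                            # exact gradient of 0.5 * x^T A x
    running_sum += np.linalg.norm(grad)
    if first_hit is None and running_sum / t <= epsilon:
        first_hit = t                       # average-gradient-norm criterion first satisfied
    noisy_grad = grad + sigma * rng.normal(size=d)   # unbiased stochastic gradient
    x = x - eta * noisy_grad                # SGD step

print(f"average gradient norm after {T} iterations: {running_sum / T:.4f}")
print(f"criterion first met at iteration: {first_hit}")
```

On this toy problem, the small $\epsilon^2$-scaled step size needed to keep the noise-induced gradient norm below $\epsilon$ forces an iteration count that grows on the order of $\epsilon^{-4}$, which is the scaling the paper shows cannot be improved in general without further assumptions.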
