Second-Order Convergence of Asynchronous Parallel Stochastic Gradient Descent: When Is the Linear Speedup Achieved?

14 Oct 2019 · Lifu Wang, Bo Shen, Ning Zhao

In machine learning, asynchronous parallel stochastic gradient descent (APSGD) is widely used to speed up training by distributing the work across multiple workers. However, the delay of the stale gradients in asynchronous algorithms is generally proportional to the total number of workers, and using delayed gradients introduces additional deviation from the true gradient. This may harm the convergence of the algorithm. A natural question is: how many workers can be used at most while still achieving good convergence and a linear speedup? In this paper, we study the second-order convergence of asynchronous algorithms in non-convex optimization. We investigate the behavior of APSGD with consistent reads near strict saddle points and provide a theoretical guarantee: if the total number of workers is bounded by $\widetilde{O}(K^{1/3}M^{-1/3})$ ($K$ is the total number of steps and $M$ is the mini-batch size), APSGD converges to good stationary points ($||\nabla f(x)||\leq \epsilon, \nabla^2 f(x)\succeq -\sqrt{\epsilon}\bm{I}, \epsilon^2\leq O(\sqrt{\frac{1}{MK}})$) and the linear speedup is achieved. Our work gives the first theoretical guarantee of second-order convergence for asynchronous algorithms. The technique we develop can be generalized to analyze other asynchronous algorithms and to understand their behavior in distributed asynchronous parallel training.
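For intuition, the following toy sketch (not from the paper; the quadratic objective, delay distribution, and all names are illustrative assumptions) simulates the delayed-gradient update x_{k+1} = x_k - eta * g(x_{k - tau_k}) near a strict saddle point, with the staleness tau_k bounded by a constant T that stands in for the number of workers.

import numpy as np

# Hypothetical toy objective: f(x) = 0.5 * x^T A x with one negative
# eigenvalue, so the origin is a strict saddle point. Purely illustrative.
A = np.diag([1.0, -0.1])

def stochastic_grad(x, batch_size, rng):
    # Exact gradient A @ x plus zero-mean noise averaged over a mini-batch.
    noise = rng.normal(scale=1.0, size=(batch_size, x.size)).mean(axis=0)
    return A @ x + noise

def apsgd(K=2000, M=16, T=8, eta=0.01, seed=0):
    # Serial simulation of asynchronous parallel SGD with stale gradients:
    # at step k the update uses the gradient evaluated at an older iterate
    # x_{k - tau_k}, where the random delay tau_k <= T mimics T workers
    # reading a stale (but consistent) copy of the parameters.
    rng = np.random.default_rng(seed)
    x = np.array([1e-3, 1e-3])          # start near the saddle point
    history = [x.copy()]
    for k in range(K):
        tau = rng.integers(0, min(T, k) + 1)   # bounded random staleness
        x_stale = history[k - tau]             # iterate the worker actually read
        g = stochastic_grad(x_stale, M, rng)
        x = x - eta * g                        # apply the delayed gradient
        history.append(x.copy())
    return x

print(apsgd())

In this sketch the escape from the saddle relies on the gradient noise, and larger staleness T adds extra deviation to each update; the paper's result can be read as a bound on how large that staleness (i.e., the number of workers) may grow with K and M before second-order convergence and the linear speedup are lost.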
