Communicate Then Adapt: An Effective Decentralized Adaptive Method for Deep Training

29 Sep 2021  ·  Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Yingya Zhang, Pan Pan, Wotao Yin

Decentralized adaptive gradient methods, in which each node averages only with its neighbors, are critical for saving communication and wall-clock training time in deep learning tasks. While their concrete recursions differ, existing decentralized adaptive methods share the same algorithmic structure: each node scales its gradient using information from past squared gradients (referred to as the adaptive step) before or while it communicates with its neighbors. In this paper, we identify the limitation of this adapt-then/while-communicate structure: it makes the resulting algorithms highly sensitive to heterogeneous data distributions and hence causes their limit points to deviate from the stationary solution. To overcome this limitation, we propose an effective decentralized adaptive method with a communicate-then-adapt structure, in which each node performs the adaptive step only after finishing its neighborhood communications. The new method is theoretically guaranteed to approach the stationary solution in the non-convex scenario. Experimental results on a variety of CV/NLP tasks show that our method clearly outperforms other existing decentralized adaptive methods.
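As a rough illustration of the communicate-then-adapt structure described above, the following is a minimal sketch of a single iteration at one node, assuming a generic Adam-style adaptive step and a row of a doubly stochastic mixing matrix. All names (e.g. `comm_then_adapt_step`, `weights`) are hypothetical, and the recursion is a generic sketch rather than the paper's exact method.

```python
import numpy as np

def comm_then_adapt_step(grad_i, neighbor_params, weights,
                         m_i, v_i, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One illustrative iteration at node i under a communicate-then-adapt
    structure (generic Adam-style sketch; not the paper's exact recursion).

    grad_i          : local stochastic gradient at node i
    neighbor_params : parameter vectors received from neighbors (including node i's own)
    weights         : mixing weights for the neighborhood average
                      (a row of a doubly stochastic matrix; hypothetical naming)
    m_i, v_i        : first/second moment estimates kept locally at node i
    """
    # 1) Communicate: average parameters with neighbors first.
    x_avg = sum(w * x_j for w, x_j in zip(weights, neighbor_params))

    # 2) Adapt: apply the adaptive (Adam-like) step to the averaged iterate.
    m_i = beta1 * m_i + (1 - beta1) * grad_i
    v_i = beta2 * v_i + (1 - beta2) * grad_i ** 2
    x_new = x_avg - lr * m_i / (np.sqrt(v_i) + eps)
    return x_new, m_i, v_i
```

In contrast, an adapt-then-communicate scheme would apply the adaptive scaling to the local iterate first and only then average with neighbors, which is the structure the paper identifies as sensitive to heterogeneous data.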
