no code implementations • 21 Jun 2021 • Enda Yu, Dezun Dong, Yemao Xu, Shuo Ouyang, Xiangke Liao
Communication overhead is the key challenge for distributed training.
1 code implementation • 14 May 2020 • Yemao Xu, Dezun Dong, Weixia Xu, Xiangke Liao
To scale out and achieve faster training, two update algorithms are mainly applied in the distributed training process, i.e., the Synchronous SGD algorithm (SSGD) and the Asynchronous SGD algorithm (ASGD).
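The contrast between the two update schemes can be illustrated with a minimal NumPy sketch (not the papers' actual implementations): SSGD waits for all workers and applies one averaged update, while ASGD applies each worker's gradient as soon as it arrives, without a barrier.

```python
import numpy as np

def ssgd_step(w, worker_grads, lr=0.1):
    """Synchronous SGD: barrier, average all workers' gradients, one global update."""
    avg_grad = np.mean(worker_grads, axis=0)
    return w - lr * avg_grad

def asgd_step(w, grad, lr=0.1):
    """Asynchronous SGD: apply a single worker's (possibly stale) gradient immediately."""
    return w - lr * grad

# Toy example: two workers both compute the gradient of f(w) = w^2 at w = 1.0.
w = np.array([1.0])
grads = [2 * w, 2 * w]            # both workers report gradient 2.0

w_sync = ssgd_step(w, grads)      # one averaged step: 1.0 - 0.1 * 2.0 = 0.8

w_async = w
for g in grads:                   # two sequential steps on stale gradients
    w_async = asgd_step(w_async, g)   # ends at 1.0 - 0.2 - 0.2 = 0.6
```

Because ASGD skips the synchronization barrier, it hides stragglers but may apply stale gradients, as the toy example's overshoot past the synchronous result suggests.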
no code implementations • 6 Mar 2020 • Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
At the algorithm level, we describe how to reduce the number of communication rounds and transmitted bits per round.
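One standard way to reduce the transmitted bits per round is gradient quantization; the sketch below shows sign-based 1-bit compression with a single shared scale (a generic illustration of the idea, not the specific scheme surveyed above).

```python
import numpy as np

def sign_compress(grad):
    """Keep only the sign of each entry plus one scalar scale (mean magnitude):
    roughly 1 bit per entry instead of 32."""
    scale = np.mean(np.abs(grad))
    return np.sign(grad), scale

def sign_decompress(signs, scale):
    """Reconstruct an approximate gradient from signs and the shared scale."""
    return signs * scale

g = np.array([0.5, -1.5, 2.0, -1.0])
signs, scale = sign_compress(g)        # scale = mean(|g|) = 1.25
g_hat = sign_decompress(signs, scale)  # approx. [1.25, -1.25, 1.25, -1.25]
```

Reducing the number of rounds is the complementary axis, e.g. by taking several local SGD steps between synchronizations instead of communicating after every mini-batch.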