Search Results for author: Yemao Xu

Found 3 papers, 1 paper with code

OD-SGD: One-step Delay Stochastic Gradient Descent for Distributed Training

1 code implementation · 14 May 2020 · Yemao Xu, Dezun Dong, Weixia Xu, Xiangke Liao

To scale out and achieve faster training, two update algorithms are mainly applied in distributed training: the Synchronous SGD algorithm (SSGD) and the Asynchronous SGD algorithm (ASGD).
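The abstract's "one-step delay" refers to applying, at each iteration, the gradient computed in the previous iteration so that gradient computation and parameter updates can overlap. The snippet below is a minimal single-process sketch of that idea on a toy quadratic loss; the function name `one_step_delay_sgd` and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of mean squared error for a toy linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def one_step_delay_sgd(X, y, lr=0.05, steps=500):
    """Sketch: at step t, the weights are updated with the gradient computed
    at step t-1, so in a real distributed setting the gradient computation
    and the parameter-server update could proceed concurrently."""
    w = np.zeros(X.shape[1])
    delayed_g = np.zeros_like(w)      # gradient from the previous step
    for _ in range(steps):
        g = grad(w, X, y)             # gradient at the current weights
        w -= lr * delayed_g           # apply the one-step-delayed gradient
        delayed_g = g                 # stash current gradient for next step
    return w

# Toy usage: recover the weights of a linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true
print(one_step_delay_sgd(X, y))
```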

Communication optimization strategies for distributed deep neural network training: A survey

no code implementations · 6 Mar 2020 · Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao

At the algorithm level, we describe how to reduce the number of communication rounds and transmitted bits per round.
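One common way to reduce the bits transmitted per round, of the kind such surveys typically cover, is top-k gradient sparsification: only the largest-magnitude gradient entries are sent as (index, value) pairs. The sketch below is illustrative only; the helper names `topk_sparsify`/`densify` and the 1% keep ratio are assumptions, not drawn from the survey.

```python
import numpy as np

def topk_sparsify(grad, k_ratio=0.01):
    """Keep only the top-k largest-magnitude entries of the gradient and
    return (indices, values, shape), cutting the bits sent per round."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of top-k magnitudes
    return idx, flat[idx], grad.shape

def densify(idx, values, shape):
    """Rebuild a dense gradient from the sparse (indices, values) message."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = values
    return out.reshape(shape)

# Toy usage: a 1%-sparsified gradient round-trips through the "network".
g = np.random.default_rng(0).normal(size=(1024,))
idx, vals, shape = topk_sparsify(g)
g_hat = densify(idx, vals, shape)
print(idx.size, "of", g.size, "entries transmitted")
```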
