Search Results for author: Yin-Peng Xie

Found 3 papers, 0 papers with code

Stochastic Normalized Gradient Descent with Momentum for Large-Batch Training

no code implementations • 28 Jul 2020 • Shen-Yi Zhao, Chang-Wei Shi, Yin-Peng Xie, Wu-Jun Li

Empirical results on deep learning tasks verify that, when adopting the same large batch size, SNGM can achieve better test accuracy than momentum SGD (MSGD) and other state-of-the-art large-batch training methods.
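The exact SNGM update is defined in the paper; below is a minimal sketch of a normalized-gradient-with-momentum step of that flavor, where the function name sngm_step and the hyperparameters lr and beta are illustrative assumptions rather than the paper's settings.

```python
import torch

def sngm_step(params, grads, momentum_buf, lr=0.1, beta=0.9, eps=1e-8):
    """Illustrative normalized-gradient momentum step (not the paper's code):
    the stochastic gradient is normalized by its global norm, accumulated
    into a momentum buffer, and the buffer drives the parameter update."""
    flat = torch.cat([g.reshape(-1) for g in grads])
    momentum_buf.mul_(beta).add_(flat / (flat.norm() + eps))  # u = beta*u + g/||g||
    offset = 0
    for p in params:
        n = p.numel()
        p.data.add_(momentum_buf[offset:offset + n].view_as(p), alpha=-lr)
        offset += n

# momentum_buf should be a zero vector with one entry per model parameter,
# e.g. momentum_buf = torch.zeros(sum(p.numel() for p in params))
```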

Stagewise Enlargement of Batch Size for SGD-based Learning

no code implementations • 26 Feb 2020 • Shen-Yi Zhao, Yin-Peng Xie, Wu-Jun Li

We theoretically prove that, compared with classical stagewise SGD, which decreases the learning rate stage by stage, SEBS can reduce the number of parameter updates without increasing the generalization error.
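The core idea, holding the learning rate fixed and enlarging the batch size at each stage, can be sketched as a simple schedule; the enlargement factor rho and the per-stage sample budget below are illustrative assumptions, not values from the paper.

```python
def sebs_schedule(num_stages=4, base_batch=128, rho=2, samples_per_stage=131072):
    """Illustrative SEBS-style schedule (not the paper's code): the batch size
    is multiplied by rho at each stage while the learning rate is held fixed,
    so each stage needs fewer parameter updates to process the same number
    of training samples."""
    schedule, batch = [], base_batch
    for stage in range(num_stages):
        updates = samples_per_stage // batch  # updates per stage shrink as batch grows
        schedule.append({"stage": stage, "batch_size": batch, "updates": updates})
        batch *= rho
    return schedule

for row in sebs_schedule():
    print(row)  # e.g. {'stage': 0, 'batch_size': 128, 'updates': 1024}
```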

Global Momentum Compression for Sparse Communication in Distributed Learning

no code implementations • 30 May 2019 • Chang-Wei Shi, Shen-Yi Zhao, Yin-Peng Xie, Hao Gao, Wu-Jun Li

With the rapid growth of data, distributed momentum stochastic gradient descent (DMSGD) has been widely used in distributed learning, especially for training large-scale deep models.
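GMC's precise rule, which compresses a globally maintained momentum, is given in the paper; the sketch below shows only top-k sparsification with error feedback, a common building block of such sparse-communication methods, with the function name and k_fraction as illustrative assumptions.

```python
import torch

def sparsify_with_residual(vec, residual, k_fraction=0.01):
    """Illustrative top-k sparsification with error feedback: only the k
    largest-magnitude entries of (vec + residual) are communicated; everything
    else is kept locally as the residual for the next communication round."""
    full = vec + residual
    k = max(1, int(k_fraction * full.numel()))
    _, idx = full.abs().topk(k)
    sparse = torch.zeros_like(full)
    sparse[idx] = full[idx]        # the sparse message sent to other workers
    return sparse, full - sparse   # residual carried to the next round
```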
