Search Results for author: Shengwei Li

Found 3 papers, 1 paper with code

Towards Understanding the Generalizability of Delayed Stochastic Gradient Descent

no code implementations • 18 Aug 2023 • Xiaoge Deng, Li Shen, Shengwei Li, Tao Sun, Dongsheng Li, DaCheng Tao

Stochastic gradient descent (SGD) performed in an asynchronous manner plays a crucial role in training large-scale machine learning models.
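
For intuition, delayed SGD applies a gradient computed at a stale iterate, w_{t+1} = w_t - lr * grad f(w_{t-tau}), and the paper studies when training with such delays still generalizes. Below is a minimal single-process simulation of that standard update rule; it is an illustrative sketch, not the paper's code, and the function names are hypothetical.

```python
import numpy as np
from collections import deque

def delayed_sgd(grad_fn, w0, lr=0.1, delay=3, steps=200):
    """Simulate delayed SGD: each update uses a gradient computed on
    parameters that are `delay` steps stale, as in asynchronous training."""
    w = w0.copy()
    stale = deque([w0.copy()] * delay, maxlen=delay)  # buffer of past iterates
    for _ in range(steps):
        g = grad_fn(stale[0])   # gradient at the oldest (stale) iterate
        stale.append(w.copy())  # record the current iterate for later reuse
        w -= lr * g             # apply the delayed gradient
    return w

# Example: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w_final = delayed_sgd(lambda w: w, w0=np.ones(5))
print(w_final)  # approaches the optimum at 0 despite the 3-step delay
```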

Merak: An Efficient Distributed DNN Training Framework with Automated 3D Parallelism for Giant Foundation Models

1 code implementation • 10 Jun 2022 • Zhiquan Lai, Shengwei Li, Xudong Tang, Keshi Ge, Weijie Liu, Yabo Duan, Linbo Qiao, Dongsheng Li

These features make it necessary to apply 3D parallelism, which integrates data parallelism, pipeline model parallelism and tensor model parallelism, to achieve high training efficiency.
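
For a concrete picture of what a 3D layout means, the sketch below factors a set of GPU ranks into a (data, pipeline, tensor) grid. This is a generic illustration of the scheme, not Merak's actual API; all names in it are hypothetical.

```python
import itertools

def build_3d_groups(world_size, dp, pp, tp):
    """Partition `world_size` ranks onto a (dp, pp, tp) grid; ranks that
    share coordinates along two axes form a parallel group on the third."""
    assert dp * pp * tp == world_size, "parallel degrees must factor world size"
    grid = {}
    for rank in range(world_size):
        d, rem = divmod(rank, pp * tp)
        p, t = divmod(rem, tp)
        grid[rank] = (d, p, t)
    # Ranks sharing the same (data, pipeline) coordinates form one
    # tensor-parallel group; data- and pipeline-parallel groups are analogous.
    tp_groups = [[r for r, (d, p, t) in grid.items() if (d, p) == key]
                 for key in itertools.product(range(dp), range(pp))]
    return grid, tp_groups

grid, tp_groups = build_3d_groups(world_size=8, dp=2, pp=2, tp=2)
print(tp_groups)  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```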

EmbRace: Accelerating Sparse Communication for Distributed Training of NLP Neural Networks

no code implementations • 18 Oct 2021 • Shengwei Li, Zhiquan Lai, Dongsheng Li, Yiming Zhang, Xiangyu Ye, Yabo Duan

EmbRace introduces Sparsity-aware Hybrid Communication, which integrates AlltoAll and model parallelism into data-parallel training, so as to reduce the communication overhead of highly sparse parameters.

Image Classification • Scheduling
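
To see why AlltoAll suits sparse gradients, the pure-Python simulation below shards rows by owner rank so that only the relevant rows travel, instead of all-reducing a mostly-zero dense tensor. It is a hedged illustration of the collective's data movement, not EmbRace's implementation.

```python
def all_to_all_sim(send_buckets):
    """Simulate AlltoAll among n ranks: send_buckets[i][j] is the chunk
    rank i sends to rank j; afterwards rank j holds the j-th chunk from
    every source rank."""
    n = len(send_buckets)
    return [[send_buckets[i][j] for i in range(n)] for j in range(n)]

# Each rank buckets its sparse embedding-gradient rows by owner rank,
# so the exchange moves only nonzero rows rather than the full gradient.
n_ranks = 4
send = [[f"rows(src={i},dst={j})" for j in range(n_ranks)]
        for i in range(n_ranks)]
recv = all_to_all_sim(send)
print(recv[2])  # rank 2 now holds its rows from every source rank
```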
