Search Results for author: Fanzhuang Meng

Found 2 papers, 0 papers with code

AntBatchInfer: Elastic Batch Inference in the Kubernetes Cluster

no code implementations · 15 Apr 2024 · Siyuan Li, Youshao Xiao, Fanzhuang Meng, Lin Ju, Lei Liang, Lin Wang, Jun Zhou

Offline batch inference is a common industrial task for deep learning applications, but ensuring stability and performance is challenging when dealing with large volumes of data and complicated inference pipelines.
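The stability concern above can be illustrated with a minimal sketch: a batch loop that retries a failed batch instead of aborting the whole job. This is a generic illustration of fault-tolerant batch inference, not AntBatchInfer's actual API; `run_batches`, `infer`, and the toy model are hypothetical placeholders.

```python
from typing import Callable, List

def run_batches(items: List[int], batch_size: int,
                infer: Callable[[List[int]], List[int]],
                max_retries: int = 3) -> List[int]:
    """Run inference batch by batch, retrying a failed batch up to
    max_retries times so one transient error does not kill the job."""
    results: List[int] = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        for attempt in range(max_retries):
            try:
                results.extend(infer(batch))
                break  # batch succeeded, move on
            except RuntimeError:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure

    return results

# Toy "model": square each input.
outputs = run_batches(list(range(10)), batch_size=4,
                      infer=lambda b: [x * x for x in b])
print(outputs)
```

In a real elastic deployment the retry would typically reschedule the batch onto a healthy worker rather than re-run it in place, but the control-flow shape is the same.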

Rethinking Memory and Communication Cost for Efficient Large Language Model Training

no code implementations · 9 Oct 2023 · Chan Wu, Hanxiao Zhang, Lin Ju, Jinjing Huang, Youshao Xiao, ZhaoXin Huan, Siyuan Li, Fanzhuang Meng, Lei Liang, Xiaolu Zhang, Jun Zhou

In this paper, we rethink the impact of memory consumption and communication costs on the training speed of large language models, and propose Partial Redundancy Optimizer (PaRO), a set of memory-communication balanced strategies.
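A back-of-the-envelope sketch of the trade-off PaRO targets: the numbers below follow the well-known analysis for Adam optimizer states under full replication versus full sharding across GPUs (the two extremes PaRO's partial-redundancy strategies sit between), not PaRO's own formulas.

```python
def optimizer_state_memory_gb(params_billion: float, num_gpus: int,
                              shard: bool) -> float:
    """Per-GPU Adam optimizer-state memory, assuming ~12 bytes per
    parameter (fp32 master weights plus two fp32 moments). Sharding
    splits the states evenly, so each GPU holds a 1/num_gpus slice."""
    bytes_per_param = 12
    total_bytes = params_billion * 1e9 * bytes_per_param
    per_gpu = total_bytes / num_gpus if shard else total_bytes
    return per_gpu / 1e9

# A 7B-parameter model on 8 GPUs:
print(optimizer_state_memory_gb(7, 8, shard=False))  # replicated: 84.0 GB per GPU
print(optimizer_state_memory_gb(7, 8, shard=True))   # sharded: 10.5 GB per GPU
```

Sharding cuts per-GPU memory by a factor of `num_gpus`, but every step must then gather the sharded states over the interconnect; partial redundancy trades some of that memory saving back for cheaper communication.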

Tasks: Language Modelling, Large Language Model
