Search Results for author: Zeyu Leo Liu

Found 1 paper, 0 papers with code

Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model

no code implementations • 23 May 2023 • Zeyu Leo Liu, Tim Dettmers, Xi Victoria Lin, Veselin Stoyanov, Xian Li

Large and sparse feed-forward layers (S-FFN) such as Mixture-of-Experts (MoE) have proven effective in scaling up Transformer model size for pretraining large language models.
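To illustrate the kind of layer the abstract refers to, below is a minimal sketch of a sparse feed-forward (MoE-style) block with top-1 routing in PyTorch. It follows the standard Mixture-of-Experts formulation rather than the specific S-FFN variants unified in the paper; all names (SparseFFN, num_experts, etc.) are illustrative.

# Minimal sketch of an MoE-style sparse FFN layer with top-1 routing.
# Generic example based on standard MoE formulations, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseFFN(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 4):
        super().__init__()
        # A small bank of independent FFN "experts".
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for routing.
        tokens = x.reshape(-1, x.size(-1))
        gate = F.softmax(self.router(tokens), dim=-1)   # (tokens, experts)
        top_prob, top_idx = gate.max(dim=-1)            # top-1 routing
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                # Each token is processed only by its chosen expert,
                # scaled by the router probability.
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)


if __name__ == "__main__":
    layer = SparseFFN(d_model=64, d_hidden=256, num_experts=4)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])

Only one expert runs per token here, so the layer adds parameters without a proportional increase in per-token compute, which is the scaling property the abstract describes.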

Avg Language Modelling +1
