Search Results for author: Qiuli Mao

Found 1 paper, 0 papers with code

FlashDecoding++: Faster Large Language Model Inference on GPUs

no code implementations • 2 Nov 2023 • Ke Hong, Guohao Dai, Jiaming Xu, Qiuli Mao, Xiuhong Li, Jun Liu, Kangdi Chen, Yuhan Dong, Yu Wang

A single and static dataflow may lead to a 50.25% performance loss for GEMMs of different shapes in LLM inference.

Tasks: Language Modelling, Large Language Model
