14 Apr 2024 • Tian Jin, Wanzin Yazar, Zifei Xu, Sayeh Sharify, Xin Wang
We demonstrate that using this custom CUDA kernel improves the throughput of LLM inference by 28%.
Language Modelling • Large Language Model