1 code implementation • 2 Nov 2023 • Hanwen Chang, Haihao Shen, Yiyang Cai, Xinyu Ye, Zhenzhong Xu, Wenhua Cheng, Kaokao Lv, Weiwei Zhang, Yintong Lu, Heng Guo
Diffusion models have gained popularity for generating images from textual descriptions.
2 code implementations • 1 Nov 2023 • Haihao Shen, Hanwen Chang, Bo Dong, Yu Luo, Hengyu Meng
Large language models (LLMs) have demonstrated remarkable performance and tremendous potential across a wide range of tasks.
1 code implementation • 17 Oct 2023 • Wenhua Cheng, Yiyang Cai, Kaokao Lv, Haihao Shen
As large language models (LLMs) become more prevalent, there is a growing need for new and improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy.
2 code implementations • 26 Sep 2023 • Haihao Shen, Naveen Mellempudi, Xin He, Qun Gao, Chang Wang, Mengni Wang
Recent advances in deep learning methods such as LLMs and Diffusion models have created a need for improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy.
1 code implementation • 11 Sep 2023 • Wenhua Cheng, Weiwei Zhang, Haihao Shen, Yiyang Cai, Xin He, Kaokao Lv
As the number of bits decreases, the quantization grid broadens, thus emphasizing the importance of up and down rounding.
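To make the effect above concrete, here is a minimal NumPy sketch (a generic illustration, not the rounding-optimization method this entry refers to): as the bit-width drops, the grid step max|w| / (2^(b-1) - 1) widens, so nudging a value to round up instead of down moves the dequantized weight by a larger amount. The `offset` argument is a hypothetical stand-in for the per-weight rounding decision that such methods learn.

```python
# Minimal sketch of symmetric per-tensor weight quantization, showing that the
# grid step -- and therefore the cost of rounding up versus down -- grows as
# the bit-width shrinks. Not the paper's method.
import numpy as np

def quantize_rtn(w, bits, offset=None):
    """Round-to-nearest quantization; `offset` in [-0.5, 0.5] can nudge
    individual values toward rounding up or down."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax            # per-tensor scale
    x = w / scale
    if offset is not None:
        x = x + offset                        # bias the rounding direction
    q = np.clip(np.round(x), -qmax - 1, qmax)
    return q * scale                          # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
for bits in (8, 4, 3, 2):
    step = np.abs(w).max() / (2 ** (bits - 1) - 1)
    mse = np.mean((w - quantize_rtn(w, bits)) ** 2)
    print(f"{bits}-bit: grid step {step:.4f}, round-to-nearest MSE {mse:.6f}")
```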
1 code implementation • 28 Jun 2023 • Haihao Shen, Hengyu Meng, Bo Dong, Zhe Wang, Ofir Zafrir, Yi Ding, Yu Luo, Hanwen Chang, Qun Gao, Ziheng Wang, Guy Boudoukh, Moshe Wasserblat
We apply our sparse accelerator to widely used Transformer-based language models including BERT-Mini, DistilBERT, BERT-Base, and BERT-Large.
2 code implementations • 31 Oct 2022 • Shira Guskin, Moshe Wasserblat, Chang Wang, Haihao Shen
Our quantized length-adaptive MiniLM model (QuaLA-MiniLM) is trained only once, dynamically fits any inference scenario, and achieves an accuracy-efficiency trade-off superior to other efficient approaches at any computational budget on the SQuAD1.1 dataset (up to 8.8x speedup with <1% accuracy loss).
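As a rough illustration of the length-adaptive idea in this entry, the toy PyTorch sketch below (hypothetical class and helper names, not the released QuaLA-MiniLM code, and with quantization omitted) runs one trained encoder with different per-layer token-keep ratios at inference time, so a single model covers several latency/accuracy operating points; token importance is approximated here by hidden-state norm, whereas the actual method learns which tokens to drop.

```python
# Toy sketch of length-adaptive inference: one encoder, many per-layer
# token-keep configurations chosen at inference time.
import torch
import torch.nn as nn

class LengthAdaptiveEncoder(nn.Module):
    def __init__(self, dim=384, heads=12, layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(layers)
        )

    def forward(self, x, keep_ratios):
        # keep_ratios: one keep fraction per layer, e.g. (1.0, 0.8, 0.6, ...)
        for layer, ratio in zip(self.layers, keep_ratios):
            x = layer(x)
            k = max(1, int(x.size(1) * ratio))
            # Keep the k tokens with the largest hidden-state norm (a crude
            # importance proxy used only for this demo).
            idx = x.norm(dim=-1).topk(k, dim=1).indices.sort(dim=1).values
            x = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return x

model = LengthAdaptiveEncoder().eval()
tokens = torch.randn(2, 128, 384)
with torch.no_grad():
    fast = model(tokens, keep_ratios=(1.0, 0.8, 0.6, 0.5, 0.4, 0.3))  # low latency
    full = model(tokens, keep_ratios=(1.0,) * 6)                      # full length
print(fast.shape, full.shape)   # fewer tokens survive on the fast path
```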
1 code implementation • 27 Oct 2022 • Haihao Shen, Ofir Zafrir, Bo Dong, Hengyu Meng, Xinyu Ye, Zhe Wang, Yi Ding, Hanwen Chang, Guy Boudoukh, Moshe Wasserblat
In this work, we propose a new pipeline for creating and running Fast Transformer models on CPUs, utilizing hardware-aware pruning, knowledge distillation, quantization, and our own Transformer inference runtime engine with optimized kernels for sparse and quantized operators.
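The snippet below is only a rough stand-in for the stages named in this entry, built from stock PyTorch utilities; it does not reproduce the hardware-aware pruning, the distillation recipe, or the authors' sparse/quantized inference runtime engine.

```python
# Sketch of a prune -> (distill) -> quantize pipeline with stock PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy "model": one Transformer-style feed-forward block.
model = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))

# 1) Pruning: unstructured magnitude pruning to 80% sparsity per Linear layer
#    (the paper's pruning is hardware-aware; this is the simplest stand-in).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")       # bake the mask into the weights

# 2) Knowledge distillation would go here: fine-tune the pruned student on a
#    teacher's soft targets (standard KD loss); omitted in this sketch.

# 3) Quantization: post-training dynamic INT8 quantization of Linear layers.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128, 768)
print(quantized(x).shape)   # pruned-then-quantized model, runnable on CPU
```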
2 code implementations • 10 Nov 2021 • Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, Moshe Wasserblat
We show how the compressed sparse pre-trained models we trained transfer their knowledge to five different downstream natural language tasks with minimal accuracy loss; a conceptual sketch of this transfer setup follows after this entry.
Ranked #2 on Natural Language Inference on MultiNLI Dev
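A conceptual sketch of the transfer step described in the entry above, using hypothetical helper names rather than the released code: the sparse pre-trained weights are fine-tuned on a downstream task while gradients at pruned positions are masked, so the sparsity pattern learned during pre-training is carried over unchanged.

```python
# Sketch: downstream fine-tuning of a sparse pre-trained model with a frozen
# sparsity pattern. Hypothetical stand-in model and data.
import torch
import torch.nn as nn

def freeze_sparsity(model):
    """Register gradient hooks so pruned (zero) weights stay at zero."""
    for param in model.parameters():
        mask = (param.detach() != 0).float()
        param.register_hook(lambda grad, mask=mask: grad * mask)

# Stand-in for a sparse pre-trained encoder plus a downstream task head.
student = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))
with torch.no_grad():                        # fake ~90% sparsity for the demo
    for p in student.parameters():
        p *= (torch.rand_like(p) > 0.9).float()

freeze_sparsity(student)
optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)

# One downstream fine-tuning step on random data (e.g. a 2-class task).
x, y = torch.randn(8, 768), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(student(x), y)
loss.backward()
optimizer.step()   # pruned positions get zero gradient, so the pattern is kept
```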
no code implementations • ICLR 2019 • Haihao Shen, Jiong Gong, Xiaoli Liu, Guoming Zhang, Ge Jin, Eric Lin
High-throughput, low-latency inference of deep neural networks is critical for deploying deep learning applications.
1 code implementation • 4 May 2018 • Jiong Gong, Haihao Shen, Guoming Zhang, Xiaoli Liu, Shane Li, Ge Jin, Niharika Maheshwari, Evarist Fomenko, Eden Segal
High-throughput, low-latency inference of deep neural networks is critical for deploying deep learning applications.