An Investigation on Different Underlying Quantization Schemes for Pre-trained Language Models

14 Oct 2020 · Zihan Zhao, Yuncong Liu, Lu Chen, Qi Liu, Rao Ma, Kai Yu

Recently, pre-trained language models like BERT have shown promising performance on multiple natural language processing tasks. However, the application of these models has been limited due to their huge size...
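The paper investigates quantization schemes for compressing such models. As a generic illustration only (not the authors' exact method), the sketch below shows k-bit linear (uniform) quantization, the simplest scheme in this family: weights are mapped to a small set of evenly spaced integer levels and then dequantized back to floats.

```python
# Generic k-bit linear (uniform) quantization sketch.
# Illustrates the kind of underlying scheme the paper studies;
# it is NOT the paper's specific algorithm.

def quantize(weights, bits=8):
    """Simulate quantization: map floats to integer codes, then back."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1                 # number of quantization steps
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]   # integer codes
    return [lo + c * scale for c in codes]               # dequantized floats

# Example: 2-bit quantization keeps only 4 representable values.
print(quantize([-1.0, -0.25, 0.0, 0.5, 1.0], bits=2))
```

In a real deployment only the integer codes plus `(lo, scale)` are stored, which is where the memory savings come from; schemes differ mainly in how the levels are placed and which parts of the network they cover.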

