no code implementations • COLING 2022 • Rajiv Movva, Jinhao Lei, Shayne Longpre, Ajay Gupta, Chris DuBois
Our work quantitatively demonstrates that combining compression methods can synergistically reduce model size, and that practitioners should prioritize (1) quantization, (2) knowledge distillation, and (3) pruning to achieve the best accuracy-to-size tradeoffs.
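Two of the three techniques the abstract prioritizes can be sketched in a few lines. Below is a minimal, library-free illustration of magnitude pruning followed by uniform symmetric int8 quantization on a random weight matrix; the helper names (`magnitude_prune`, `quantize_int8`) are illustrative, not from the paper, and a real pipeline would apply these to trained model weights with calibration.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (illustrative helper)."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w).ravel())[k]
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    """Uniform symmetric 8-bit quantization: float32 -> int8 plus a scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)   # ~50% of weights become zero
q, scale = quantize_int8(pruned)            # 4x fewer bytes per weight
dequant = q.astype(np.float32) * scale      # approximate reconstruction
```

Combining the two is where the "synergy" the abstract mentions comes from: pruning makes the tensor sparse (compressible) while quantization shrinks each remaining value from 32 bits to 8.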
3 code implementations • 9 Nov 2019 • Iddo Drori, Darshan Thaker, Arjun Srivatsa, Daniel Jeong, Yueqi Wang, Linyong Nan, Fan Wu, Dimitri Leggas, Jinhao Lei, Weiyi Lu, Weilong Fu, Yuan Gao, Sashank Karri, Anand Kannan, Antonio Moretti, Mohammed AlQuraishi, Chen Keasar, Itsik Pe'er
Our dataset consists of amino acid sequences, Q8 secondary structures, position specific scoring matrices, multiple sequence alignment co-evolutionary features, backbone atom distance matrices, torsion angles, and 3D coordinates.
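The per-protein features listed above can be pictured as one record with a field per modality. The sketch below uses hypothetical field names and shapes (the paper's actual schema may differ); `L` is the sequence length.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class ProteinRecord:
    # Field names and shapes are illustrative, not the dataset's actual schema.
    sequence: str                 # amino acid sequence, length L
    q8_labels: str                # 8-class secondary structure, one label per residue
    pssm: np.ndarray              # (L, 20) position-specific scoring matrix
    msa_features: np.ndarray      # (L, L) MSA co-evolutionary features
    distance_matrix: np.ndarray   # (L, L) backbone atom distances
    torsion_angles: np.ndarray    # (L, 2) phi/psi torsion angles
    coords: np.ndarray            # (L, 3) 3D backbone coordinates

L = 5
rec = ProteinRecord(
    sequence="MKVLA",
    q8_labels="CCHHH",
    pssm=np.zeros((L, 20)),
    msa_features=np.zeros((L, L)),
    distance_matrix=np.zeros((L, L)),
    torsion_angles=np.zeros((L, 2)),
    coords=np.zeros((L, 3)),
)
```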
no code implementations • ACL 2017 • Qiao Qian, Minlie Huang, Jinhao Lei, Xiaoyan Zhu
This paper deals with sentence-level sentiment classification.
no code implementations • 12 Nov 2016 • Qiao Qian, Minlie Huang, Jinhao Lei, Xiaoyan Zhu
In this paper, we propose simple models trained with sentence-level annotation that also attempt to generate linguistically coherent representations by employing regularizers modeling the linguistic roles of sentiment lexicons, negation words, and intensity words.
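One way such a regularizer can be pictured is as a penalty on how much the sentiment distribution shifts between adjacent positions, with the expectation reversed after a negation word. The sketch below is a rough stand-in under that assumption, not the paper's actual formulation; `coherence_penalty` and the toy negation list are hypothetical.

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """KL divergence between two discrete distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def coherence_penalty(token_preds, tokens, negation_words=frozenset({"not", "never"})):
    """Sum KL penalties between adjacent positions' sentiment distributions.

    After a negation word, compare against the reversed distribution
    instead -- a rough stand-in for a negation regularizer, not the
    paper's actual loss term.
    """
    penalty = 0.0
    for i in range(1, len(token_preds)):
        prev = token_preds[i - 1]
        if tokens[i - 1] in negation_words:
            prev = prev[::-1]  # flip the polarity ordering
        penalty += kl(token_preds[i], prev)
    return penalty

# Toy per-token [negative, positive] distributions for "not good really":
preds = [np.array([0.8, 0.2]), np.array([0.2, 0.8]), np.array([0.25, 0.75])]
toks = ["not", "good", "really"]
loss = coherence_penalty(preds, toks)
# The flip after "not" makes the adjacent distributions agree, so the
# negation-aware penalty stays small.
```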