Search Results for author: Haoyu Sheng

Found 1 paper, 0 papers with code

Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization

no code implementations · 8 Apr 2019 · Yangyang Shi, Mei-Yuh Hwang, Xin Lei, Haoyu Sheng

Using knowledge distillation with trust regularization, we reduce the parameter count to a third of the previously published best model while maintaining state-of-the-art perplexity on the Penn Treebank data.

Tasks: Knowledge Distillation · Language Modelling · +2
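
The abstract above describes compressing an RNN language model via knowledge distillation. As a rough illustration only, the sketch below shows a generic distillation loss for next-word prediction: a standard cross-entropy term on the ground-truth tokens interpolated with a temperature-scaled KL term against a teacher model. The function name, hyperparameters (`temperature`, `alpha`), and tensor shapes are assumptions for illustration; this is not the paper's trust regularization formulation.

```python
# Hypothetical sketch of a distillation loss for RNN language modeling.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=2.0, alpha=0.5):
    """Interpolate hard-target cross-entropy with a soft-target KL term.

    student_logits, teacher_logits: (batch * seq_len, vocab_size)
    targets: (batch * seq_len,) ground-truth token ids
    temperature, alpha: assumed hyperparameters, not taken from the paper
    """
    # Hard-target term: standard next-word cross-entropy.
    ce = F.cross_entropy(student_logits, targets)

    # Soft-target term: KL divergence between temperature-scaled
    # student and teacher distributions.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kl = kl * (temperature ** 2)  # rescale so gradients match the CE term

    return alpha * ce + (1.0 - alpha) * kl
```

A smaller student (e.g. fewer hidden units or layers) trained with this combined objective can approach the teacher's perplexity at a fraction of the parameter count, which is the general effect the abstract reports.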
