The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning

31 May 2019 · Bonggun Shin, Hao Yang, Jinho D. Choi

Recent advances in deep learning have fueled the demand for neural models in real-world applications. In practice, these applications often must be deployed with limited resources while maintaining high accuracy. This paper addresses the core of neural models in NLP, word embeddings, and presents a new embedding distillation framework that dramatically reduces the dimension of word embeddings without compromising accuracy. A novel distillation ensemble approach is also proposed that trains a highly efficient student model using multiple teacher models. In our approach, the teacher models play a role only during training, so the student model operates on its own, without support from the teacher models, during decoding, which makes it eighty times faster and lighter than typical ensemble methods. All models are evaluated on seven document classification datasets and show a significant advantage over the teacher models in most cases. Our analysis depicts insightful transformations of word embeddings under distillation and suggests a future direction for ensemble approaches using neural models.
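The core idea of the distillation ensemble can be illustrated with a minimal sketch: several teacher classifiers produce soft targets during training, a small student model is fit to their average, and at inference time the student runs alone. The sketch below uses NumPy with a toy linear student and random teacher logits; all names, sizes, and the MSE objective are illustrative assumptions, not the paper's exact architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's models): three "teacher"
# classifiers emit logits for a batch of documents; the student is
# trained to match the averaged teacher logits (soft targets), so the
# teachers are only needed at training time.
n_docs, n_classes, student_dim = 32, 5, 8

teacher_logits = [rng.normal(size=(n_docs, n_classes)) for _ in range(3)]
soft_targets = np.mean(teacher_logits, axis=0)  # ensemble of teachers

X = rng.normal(size=(n_docs, student_dim))  # low-dimensional student features
W = np.zeros((student_dim, n_classes))      # student's linear classifier

# Train the student by gradient descent on the MSE to the soft targets.
lr = 0.1
for _ in range(500):
    pred = X @ W
    W -= lr * (X.T @ (pred - soft_targets)) / n_docs

# At decoding time only X @ W is computed; no teacher is consulted.
final_loss = np.mean((X @ W - soft_targets) ** 2)
```

Because decoding touches only the student's small weight matrix, the inference cost is independent of how many teachers were used in training, which is where the speed and size advantage over conventional ensembles comes from.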


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Sentiment Analysis | CR | STM+TSED+PT+2L | Accuracy | 82.73 | #9 |
| Sentiment Analysis | MPQA | STM+TSED+PT+2L | Accuracy | 89.83 | #2 |
| Sentiment Analysis | MR | STM+TSED+PT+2L | Accuracy | 80.09 | #10 |
| Sentiment Analysis | SST-2 Binary classification | STM+TSED+PT+2L | Accuracy | 86.95 | #75 |
| Sentiment Analysis | SST-5 Fine-grained classification | STM+TSED+PT+2L | Accuracy | 49.14 | #22 |
| Subjectivity Analysis | SUBJ | STM+TSED+PT+2L | Accuracy | 92.34 | #14 |
| Text Classification | TREC-6 | STM+TSED+PT+2L | Error | 7.04 | #14 |

Methods


No methods listed for this paper.