Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling

4 Nov 2016 · Hakan Inan, Khashayar Khosravi, Richard Socher

Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning, both in how well the available information is used and in the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state-of-the-art performance on the Penn Treebank with a variety of network models.
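The central architectural change described in the abstract is reusing the input embedding matrix as the weight of the output classifier. As a concrete illustration, here is a minimal PyTorch sketch of a word-level LSTM language model with tied weights; the class name `TiedLSTMLM` and all hyperparameters (vocabulary size, dimensions, dropout) are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn as nn

class TiedLSTMLM(nn.Module):
    """Word-level LSTM language model with tied input/output embeddings.

    The decoder (output projection) shares its weight matrix with the
    input embedding, so the model learns a single V x d word matrix
    instead of two separate ones.
    """
    def __init__(self, vocab_size, emb_dim=650, hidden_dim=650,
                 num_layers=2, dropout=0.5):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers,
                            dropout=dropout, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)
        # Weight tying: reuse the embedding matrix as the output projection.
        # The shapes only match if the hidden size equals the embedding size.
        assert hidden_dim == emb_dim, "tying requires emb_dim == hidden_dim"
        self.decoder.weight = self.embedding.weight

    def forward(self, tokens, hidden=None):
        x = self.drop(self.embedding(tokens))   # (B, T, d)
        out, hidden = self.lstm(x, hidden)      # (B, T, d)
        logits = self.decoder(self.drop(out))   # (B, T, V)
        return logits, hidden

# Example usage with hypothetical sizes: predict the next word at each position.
model = TiedLSTMLM(vocab_size=10000)
inputs  = torch.randint(0, 10000, (20, 35))    # (batch, seq_len)
targets = torch.randint(0, 10000, (20, 35))    # next-word targets (inputs shifted by one)
logits, _ = model(inputs)
loss = nn.functional.cross_entropy(logits.reshape(-1, 10000), targets.reshape(-1))
```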

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Language Modelling | Penn Treebank (Word Level) | Inan et al. (2016) - Variational RHN | Validation perplexity | 68.1 | #27 |
| Language Modelling | Penn Treebank (Word Level) | Inan et al. (2016) - Variational RHN | Test perplexity | 66.0 | #34 |
| Language Modelling | WikiText-2 | Inan et al. (2016) - Variational LSTM (tied) (h=650) | Validation perplexity | 92.3 | #26 |
| Language Modelling | WikiText-2 | Inan et al. (2016) - Variational LSTM (tied) (h=650) | Test perplexity | 87.7 | #36 |
| Language Modelling | WikiText-2 | Inan et al. (2016) - Variational LSTM (tied) (h=650) + augmented loss | Validation perplexity | 91.5 | #25 |
| Language Modelling | WikiText-2 | Inan et al. (2016) - Variational LSTM (tied) (h=650) + augmented loss | Test perplexity | 87.0 | #35 |
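The "+ augmented loss" rows above refer to the paper's proposed loss framework, in which the model is also trained toward soft targets derived from word-embedding similarity rather than one-hot targets alone. The sketch below shows one way such an augmented term could be implemented; the function name, the temperature, and the weighting factor `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def augmented_loss(logits, targets, embedding_weight, temperature=1.0, alpha=1.0):
    """Standard cross-entropy plus an embedding-similarity augmented term.

    logits:           (N, V) unnormalized model scores
    targets:          (N,)   indices of the true next words
    embedding_weight: (V, d) tied word-embedding matrix
    """
    # Standard one-hot cross-entropy term.
    ce = F.cross_entropy(logits, targets)

    # Soft targets: similarity of the true word to every word in embedding space,
    # turned into a distribution with a temperature softmax.
    with torch.no_grad():
        true_vecs = embedding_weight[targets]                  # (N, d)
        sims = true_vecs @ embedding_weight.t()                # (N, V)
        soft_targets = F.softmax(sims / temperature, dim=-1)   # (N, V)

    # KL divergence between the soft targets and the model distribution.
    log_probs = F.log_softmax(logits, dim=-1)
    aug = F.kl_div(log_probs, soft_targets, reduction="batchmean")

    return ce + alpha * aug
```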
