Improving Neural Language Models with a Continuous Cache

13 Dec 2016 · Edouard Grave, Armand Joulin, Nicolas Usunier

We propose an extension to neural network language models that adapts their predictions to the recent history. Our model is a simplified version of memory-augmented networks: it stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and the cache models used with count-based language models. We demonstrate on several language modelling datasets that our approach performs significantly better than recent memory-augmented networks.
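The mechanism described above is simple enough to sketch in a few lines. Below is a minimal NumPy illustration of the interpolation variant (the "continuous cache pointer" in the results table): the cache stores pairs of past hidden states and the words that followed them; at prediction time, the current hidden state is compared to the stored states via a dot product, and the resulting cache distribution is mixed with the base model's softmax output. The function names and the default values of theta (dot-product scaling) and lam (interpolation weight) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cache_distribution(h_t, cache_states, cache_words, vocab_size, theta=0.3):
    """Probability over the vocabulary induced by the cache.

    h_t:          (d,) current hidden state
    cache_states: (cache_size, d) past hidden states h_i
    cache_words:  (cache_size,) integer ids of the words that followed each h_i
    Each cache slot votes for its word with weight exp(theta * <h_t, h_i>).
    """
    scores = theta * (cache_states @ h_t)     # (cache_size,) dot products
    weights = np.exp(scores - scores.max())   # numerically stable softmax over slots
    weights /= weights.sum()
    p_cache = np.zeros(vocab_size)
    np.add.at(p_cache, cache_words, weights)  # sum the votes of identical words
    return p_cache

def interpolate_with_cache(p_model, h_t, cache_states, cache_words,
                           lam=0.2, theta=0.3):
    """Linearly interpolate the base LM distribution with the cache distribution."""
    p_cache = cache_distribution(h_t, cache_states, cache_words,
                                 len(p_model), theta)
    return (1.0 - lam) * p_model + lam * p_cache
```

Because the cache only requires storing recent hidden states and computing dot products against them, it can be added to a pre-trained language model at test time without any retraining, which is what makes it scale to large memory sizes.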


Results from the Paper


Task                 Dataset        Model                                                   Metric            Value   Global Rank
Language Modelling   WikiText-103   LSTM                                                    Test perplexity   48.7    #85
Language Modelling   WikiText-103   Neural cache model (size = 100)                         Test perplexity   44.8    #81
Language Modelling   WikiText-103   Neural cache model (size = 2,000)                       Test perplexity   40.8    #80
Language Modelling   WikiText-2     Grave et al. (2016) - LSTM                              Test perplexity   99.3    #37
Language Modelling   WikiText-2     Grave et al. (2016) - LSTM + continuous cache pointer   Test perplexity   68.9    #33
