Partially Shuffling the Training Data to Improve Language Models

arXiv 2019 · Ofir Press

Although SGD requires shuffling the training data between epochs, currently none of the word-level language modeling systems do this. Naively shuffling all sentences in the training data would not permit the model to learn inter-sentence dependencies. Here we present a method that partially shuffles the training data between epochs. This method makes each batch random, while keeping most sentence ordering intact. It achieves new state of the art results on word-level language modeling on both the Penn Treebank and WikiText-2 datasets.
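The abstract does not spell out the shuffling mechanism, so the following is only a minimal sketch of one plausible implementation consistent with the description: the token stream is reshaped into batch_size rows (standard truncated-BPTT batching), and at the start of every epoch each row is given an independent random circular shift. Within a row the original ordering is preserved except at the single wrap point, so most inter-sentence dependencies remain intact while the batches differ from epoch to epoch. The function names (batchify, partial_shuffle) are illustrative, not the paper's code.

```python
# Hypothetical sketch of a per-epoch partial shuffle (assumed implementation,
# not the author's released code).
import numpy as np

def batchify(tokens, batch_size):
    """Trim the token stream and reshape it into (batch_size, seq_len) rows."""
    seq_len = len(tokens) // batch_size
    tokens = np.asarray(tokens[: seq_len * batch_size])
    return tokens.reshape(batch_size, seq_len)

def partial_shuffle(batched, rng):
    """Circularly shift every row by its own random offset, once per epoch."""
    batch_size, seq_len = batched.shape
    shifted = np.empty_like(batched)
    for i in range(batch_size):
        offset = rng.integers(seq_len)      # random starting point for this row
        shifted[i] = np.roll(batched[i], -offset)
    return shifted

# Usage: re-randomize the rows at the start of each epoch, then iterate
# fixed-length BPTT windows over the shifted data as usual.
rng = np.random.default_rng(0)
data = batchify(list(range(1000)), batch_size=8)
for epoch in range(3):
    epoch_data = partial_shuffle(data, rng)
    # ... iterate over epoch_data in BPTT chunks and train the language model ...
```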

Task | Dataset | Model | Metric | Value | Global Rank
Language Modelling | Penn Treebank (Word Level) | AWD-LSTM-DOC + Partial Shuffle | Validation perplexity | 53.79 | #12
Language Modelling | Penn Treebank (Word Level) | AWD-LSTM-DOC + Partial Shuffle | Test perplexity | 52.0 | #15
Language Modelling | Penn Treebank (Word Level) | AWD-LSTM-DOC + Partial Shuffle | Params | 23M | #19
Language Modelling | Penn Treebank (Word Level) | AWD-LSTM-MoS + Partial Shuffle | Validation perplexity | 55.89 | #15
Language Modelling | Penn Treebank (Word Level) | AWD-LSTM-MoS + Partial Shuffle | Test perplexity | 53.92 | #18
Language Modelling | Penn Treebank (Word Level) | AWD-LSTM-MoS + Partial Shuffle | Params | 22M | #23
Language Modelling | WikiText-2 | AWD-LSTM-MoS + Partial Shuffle | Validation perplexity | 62.38 | #18
Language Modelling | WikiText-2 | AWD-LSTM-MoS + Partial Shuffle | Test perplexity | 59.98 | #25
Language Modelling | WikiText-2 | AWD-LSTM-MoS + Partial Shuffle | Params | 35M | #12
Language Modelling | WikiText-2 | AWD-LSTM-DOC + Partial Shuffle | Validation perplexity | 60.16 | #16
Language Modelling | WikiText-2 | AWD-LSTM-DOC + Partial Shuffle | Test perplexity | 57.85 | #23
Language Modelling | WikiText-2 | AWD-LSTM-DOC + Partial Shuffle | Params | 37M | #9