Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

Transformers have the potential to learn longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture, Transformer-XL, that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is more than 1,800 times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwik8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both Tensorflow and PyTorch.
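To make the segment-level recurrence idea concrete, below is a minimal PyTorch sketch (the class and function names are hypothetical, not taken from the released code): each segment attends over its own hidden states plus a gradient-detached cache of hidden states from the previous segment, so context can extend past the segment boundary while backpropagation stays truncated within a segment. For brevity it omits causal masking, the relative positional encoding scheme, and the per-layer memory wiring of the full model.

```python
import torch
import torch.nn as nn


class SegmentRecurrentLayer(nn.Module):
    """A single self-attention layer whose keys/values cover the current
    segment plus a cached memory of hidden states from earlier segments."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x, memory=None):
        # Queries come from the current segment only; keys/values also
        # include the (detached) memory so attention reaches further back.
        context = torch.cat([memory, x], dim=1) if memory is not None else x
        out, _ = self.attn(x, context, context, need_weights=False)
        x = self.norm1(x + out)
        return self.norm2(x + self.ff(x))


def run_segments(layer, segments, mem_len=64):
    """Process a long sequence segment by segment, carrying a fixed-size
    memory forward; detach() stops gradients at the segment boundary."""
    memory = None
    outputs = []
    for seg in segments:  # each seg: (batch, seg_len, d_model)
        h = layer(seg, memory)
        memory = h.detach()[:, -mem_len:]  # cache newest hidden states
        outputs.append(h)
    return torch.cat(outputs, dim=1)


if __name__ == "__main__":
    layer = SegmentRecurrentLayer(d_model=32, n_heads=4)
    segs = list(torch.randn(2, 64, 32).split(16, dim=1))  # 4 segments of length 16
    out = run_segments(layer, segs, mem_len=16)
    print(out.shape)  # torch.Size([2, 64, 32])
```

In the full architecture the memory for layer n is taken from layer n-1 at the previous segment and the attention uses relative positional encodings; this sketch only illustrates the caching-and-detach pattern that enables the extended context.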

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Language Modelling | enwik8 | Transformer-XL (24 layers) | Bit per Character (BPC) | 0.99 | #12
Language Modelling | enwik8 | Transformer-XL (24 layers) | Number of params | 277M | #2
Language Modelling | enwik8 | Transformer-XL (12 layers) | Bit per Character (BPC) | 1.06 | #25
Language Modelling | enwik8 | Transformer-XL (12 layers) | Number of params | 41M | #27
Language Modelling | enwik8 | Transformer-XL (18 layers) | Bit per Character (BPC) | 1.03 | #23
Language Modelling | enwik8 | Transformer-XL (18 layers) | Number of params | 88M | #15
Language Modelling | Hutter Prize | 24-layer Transformer-XL | Bit per Character (BPC) | 0.99 | #4
Language Modelling | Hutter Prize | 24-layer Transformer-XL | Number of params | 277M | #1
Language Modelling | Hutter Prize | 12-layer Transformer-XL | Bit per Character (BPC) | 1.06 | #8
Language Modelling | Hutter Prize | 12-layer Transformer-XL | Number of params | 41M | #14
Language Modelling | Hutter Prize | 18-layer Transformer-XL | Bit per Character (BPC) | 1.03 | #7
Language Modelling | Hutter Prize | 18-layer Transformer-XL | Number of params | 88M | #7
Language Modelling | One Billion Word | Transformer-XL Large | PPL | 21.8 | #3
Language Modelling | One Billion Word | Transformer-XL Large | Number of params | 0.8B | #1
Language Modelling | One Billion Word | Transformer-XL Base | PPL | 23.5 | #6
Language Modelling | One Billion Word | Transformer-XL Base | Number of params | 0.46B | #1
Language Modelling | Penn Treebank (Word Level) | Transformer-XL | Validation perplexity | 56.72 | #17
Language Modelling | Penn Treebank (Word Level) | Transformer-XL | Test perplexity | 54.55 | #22
Language Modelling | Penn Treebank (Word Level) | Transformer-XL | Number of params | 24M | #7
Language Modelling | Text8 | Transformer-XL (24 layers) | Bit per Character (BPC) | 1.08 | #5
Language Modelling | Text8 | Transformer-XL (24 layers) | Number of params | 277M | #2
Language Modelling | WikiText-103 | Transformer-XL Large | Validation perplexity | 18.2 | #16
Language Modelling | WikiText-103 | Transformer-XL Large | Test perplexity | 18.3 | #33
Language Modelling | WikiText-103 | Transformer-XL Large | Number of params | 257M | #12
Language Modelling | WikiText-103 | Transformer-XL Standard | Validation perplexity | 23.1 | #24
Language Modelling | WikiText-103 | Transformer-XL Standard | Test perplexity | 24.0 | #54
Language Modelling | WikiText-103 | Transformer-XL Standard | Number of params | 151M | #29