Pay Attention when Required

9 Sep 2020  ·  Swetha Mandava, Szymon Migacz, Alex Fit-Florea

Transformer-based models consist of interleaved feed-forward blocks, which capture content meaning, and relatively more expensive self-attention blocks, which capture context meaning. In this paper, we explored trade-offs in the ordering of these blocks to improve upon the current Transformer architecture and proposed the PAR Transformer. It needs 35% lower compute time than Transformer-XL, achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, while retaining perplexity on the WikiText-103 language modelling benchmark. We further validated our results on the text8 and enwiki8 datasets, as well as on the BERT model.
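To make the idea concrete, below is a minimal PyTorch sketch of a PAR-style stack in which the mix and ordering of self-attention and feed-forward blocks is given by a pattern string. The specific pattern, layer count, and hyperparameters (d_model, n_heads, d_ff) are illustrative assumptions, not the searched architecture from the paper.

```python
# Minimal sketch of a PAR-style block stack. The block ordering is set by a
# pattern string; the paper derives its ordering via architecture search, so
# the pattern used here is only an illustration.
import torch
import torch.nn as nn


class SelfAttentionBlock(nn.Module):
    """Pre-norm multi-head self-attention with a residual connection."""
    def __init__(self, d_model, n_heads, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)

    def forward(self, x, attn_mask=None):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        return x + out


class FeedForwardBlock(nn.Module):
    """Pre-norm position-wise feed-forward block with a residual connection."""
    def __init__(self, d_model, d_ff, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(),
            nn.Dropout(dropout), nn.Linear(d_ff, d_model), nn.Dropout(dropout),
        )

    def forward(self, x):
        return x + self.ff(self.norm(x))


class PARStack(nn.Module):
    """Stack of blocks ordered by a pattern string:
    's' = self-attention block, 'f' = feed-forward block."""
    def __init__(self, pattern, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.blocks = nn.ModuleList(
            SelfAttentionBlock(d_model, n_heads) if c == 's'
            else FeedForwardBlock(d_model, d_ff)
            for c in pattern
        )

    def forward(self, x, attn_mask=None):
        for block in self.blocks:
            if isinstance(block, SelfAttentionBlock):
                x = block(x, attn_mask)
            else:
                x = block(x)
        return x


# Example: a 16-block stack where most self-attention blocks are replaced by
# feed-forward blocks (illustrative only).
model = PARStack("ssssffffffffffff")
x = torch.randn(2, 128, 512)   # (batch, sequence, d_model)
print(model(x).shape)          # torch.Size([2, 128, 512])
```

Because feed-forward blocks avoid the quadratic cost of attention over the sequence length, replacing a large fraction of the self-attention blocks in this way is where the reported compute savings come from.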


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Language Modelling | enwiki8 | PAR Transformer 24B | Bit per Character (BPC) | 1.11 | #1 |
| Sentiment Analysis | SST-2 Binary classification | PAR BERT Base | Accuracy | 91.6 | #50 |
| Language Modelling | Text8 | PAR Transformer 24B | Bit per Character (BPC) | 1.18 | #13 |
| Language Modelling | WikiText-103 | PAR Transformer Large | Test perplexity | 18.4 | #35 |
| Language Modelling | WikiText-103 | PAR Transformer Base | Test perplexity | 22.7 | #48 |
