Pay Less Attention with Lightweight and Dynamic Convolutions

Self-attention is a useful mechanism for building generative models of language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively with the best reported self-attention results. Next, we introduce dynamic convolutions, which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set, dynamic convolutions achieve a new state of the art of 29.7 BLEU.
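The sketch below illustrates the core idea in PyTorch: a linear projection of the current time step predicts a depthwise convolution kernel, which is softmax-normalized over its width and shared across channel groups ("heads"), so the cost is linear in sequence length. This is a minimal illustration, not the authors' fairseq implementation; the class name `DynamicConv1d`, the symmetric padding, and the omission of the paper's GLU input projection and DropConnect regularization are simplifying assumptions.

```python
import torch
import torch.nn.functional as F
from torch import nn


class DynamicConv1d(nn.Module):
    """Depthwise convolution whose kernel is predicted from each time step.

    Follows the paper's recipe in simplified form: channels are split into
    `heads` groups that share kernel weights, and each predicted kernel is
    softmax-normalized over its width k.
    """

    def __init__(self, dim: int, kernel_size: int = 3, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.dim, self.k, self.heads = dim, kernel_size, heads
        # Maps the current input x_t to heads * k kernel weights.
        self.weight_proj = nn.Linear(dim, heads * kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        B, T, D = x.shape
        H, k = self.heads, self.k
        # One kernel per time step and head, normalized over the width k.
        w = F.softmax(self.weight_proj(x).view(B, T, H, k), dim=-1)
        # Slide a window of k context elements over the (padded) sequence.
        pad = k // 2
        x_pad = F.pad(x, (0, 0, pad, k - 1 - pad))   # (B, T + k - 1, D)
        windows = x_pad.unfold(1, k, 1)              # (B, T, D, k)
        windows = windows.reshape(B, T, H, D // H, k)
        # Weighted sum over each window: O(T * k), linear in sequence length.
        out = torch.einsum('bthk,bthck->bthc', w, windows)
        return out.reshape(B, T, D)


# Usage: an 8-channel layer with 2 heads over a length-10 sequence.
layer = DynamicConv1d(dim=8, kernel_size=3, heads=2)
y = layer(torch.randn(4, 10, 8))   # y.shape == (4, 10, 8)
```

Setting `weight_proj` to a fixed (input-independent) parameter recovers the paper's LightConv variant; making it a function of x_t gives DynamicConv.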

ICLR 2019 · PDF · Abstract
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Document Summarization | CNN / Daily Mail | LightConv | ROUGE-1 | 39.52 | #22 |
| Document Summarization | CNN / Daily Mail | LightConv | ROUGE-2 | 15.97 | #22 |
| Document Summarization | CNN / Daily Mail | LightConv | ROUGE-L | 36.51 | #21 |
| Abstractive Text Summarization | CNN / Daily Mail | DynamicConv | ROUGE-1 | 39.84 | #45 |
| Abstractive Text Summarization | CNN / Daily Mail | DynamicConv | ROUGE-2 | 16.25 | #49 |
| Abstractive Text Summarization | CNN / Daily Mail | DynamicConv | ROUGE-L | 36.73 | #43 |
| Document Summarization | CNN / Daily Mail | DynamicConv | ROUGE-1 | 39.84 | #21 |
| Document Summarization | CNN / Daily Mail | DynamicConv | ROUGE-2 | 16.25 | #20 |
| Document Summarization | CNN / Daily Mail | DynamicConv | ROUGE-L | 36.73 | #19 |
| Machine Translation | IWSLT2014 German-English | LightConv | BLEU score | 34.8 | #25 |
| Machine Translation | IWSLT2014 German-English | DynamicConv | BLEU score | 35.2 | #23 |
| Language Modelling | One Billion Word | DynamicConv | PPL | 26.67 | #13 |
| Language Modelling | One Billion Word | DynamicConv | Number of params | 0.34B | #1 |
| Machine Translation | WMT2014 English-French | DynamicConv | BLEU score | 43.2 | #12 |
| Machine Translation | WMT2014 English-French | LightConv | BLEU score | 43.1 | #15 |
| Machine Translation | WMT2014 English-German | LightConv | BLEU score | 28.9 | #36 |
| Machine Translation | WMT2014 English-German | LightConv | Number of params | 202M | #9 |
| Machine Translation | WMT2014 English-German | DynamicConv | BLEU score | 29.7 | #18 |
| Machine Translation | WMT2014 English-German | DynamicConv | Number of params | 213M | #6 |
| Machine Translation | WMT 2017 English-Chinese | LightConv | BLEU score | 24.3 | #2 |
| Machine Translation | WMT 2017 English-Chinese | DynamicConv | BLEU score | 24.4 | #1 |
