Dilated Sliding Window Attention

Introduced by Beltagy et al. in Longformer: The Long-Document Transformer

Dilated Sliding Window Attention is a sparse attention pattern for attention-based models. It was proposed as part of the Longformer architecture. It is motivated by the fact that non-sparse attention in the original Transformer formulation has a self-attention component with $O\left(n^{2}\right)$ time and memory complexity, where $n$ is the input sequence length, and thus does not scale efficiently to long inputs.

Compared to a Sliding Window Attention pattern, we can further increase the receptive field without increasing computation by making the sliding window "dilated". This is analogous to dilated CNNs, where the window has gaps of size equal to the dilation $d$. Assuming a fixed $d$ and window size $w$ for all $l$ layers, the receptive field is $l \times d \times w$, which can reach tens of thousands of tokens even for small values of $d$.

Source: Longformer: The Long-Document Transformer
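
To make the pattern concrete, below is a minimal single-head sketch in PyTorch. It is not the Longformer implementation (which relies on a custom banded-attention kernel); the function name and the default values of the window size `w` and dilation `d` are illustrative. Each query attends only to itself and to `w` neighbours spaced `d` positions apart, so the score matrix and memory grow as $O(n \times w)$ rather than $O(n^{2})$.

```python
import torch
import torch.nn.functional as F

def dilated_sliding_window_attention(q, k, v, w=4, d=2):
    """q, k, v: (seq_len, dim). Each position attends to w // 2 dilated
    neighbours on each side of itself (gaps of size d) plus itself."""
    n, dim = q.shape
    # Dilated offsets, e.g. w=4, d=2 -> [-4, -2, 0, 2, 4]
    offsets = torch.arange(-(w // 2), w // 2 + 1) * d                # (w+1,)
    neighbors = torch.arange(n)[:, None] + offsets[None, :]          # (n, w+1)
    valid = (neighbors >= 0) & (neighbors < n)                       # mask out-of-range positions
    neighbors = neighbors.clamp(0, n - 1)

    k_win = k[neighbors]                                             # (n, w+1, dim)
    v_win = v[neighbors]                                             # (n, w+1, dim)
    scores = (q[:, None, :] * k_win).sum(-1) / dim ** 0.5            # (n, w+1), not (n, n)
    scores = scores.masked_fill(~valid, float("-inf"))
    probs = F.softmax(scores, dim=-1)
    return (probs[:, :, None] * v_win).sum(dim=1)                    # (n, dim)

# Example: 16 tokens with 8-dim features; output has the same shape as q.
x = torch.randn(16, 8)
out = dilated_sliding_window_attention(x, x, x, w=4, d=2)
```

Stacking $l$ such layers gives the $l \times d \times w$ receptive field described above while keeping per-layer cost linear in $n$.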

Latest Papers

| PAPER | AUTHORS | DATE |
| --- | --- | --- |
| Longformer for MS MARCO Document Re-ranking Task | Ivan Sekulić, Amir Soleimani, Mohammad Aliannejadi, Fabio Crestani | 2020-09-20 |
| Efficient Transformers: A Survey | Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler | 2020-09-14 |
| Fine-Tune Longformer for Jointly Predicting Rumor Stance and Veracity | Anant Khandelwal | 2020-07-15 |
| Document Classification for COVID-19 Literature | Bernal Jiménez Gutiérrez, Juncheng Zeng, Dongdong Zhang, Ping Zhang, Yu Su | 2020-06-15 |
| Longformer: The Long-Document Transformer | Iz Beltagy, Matthew E. Peters, Arman Cohan | 2020-04-10 |
