Longformer

Introduced by Beltagy et al. in Longformer: The Long-Document Transformer

Longformer is a modified Transformer architecture. Traditional Transformer-based models are unable to process long sequences because their self-attention operation scales quadratically with sequence length. To address this, Longformer uses an attention pattern that scales linearly with sequence length, making it practical to process documents of thousands of tokens or longer. The attention mechanism is a drop-in replacement for standard self-attention and combines a local windowed attention with a task-motivated global attention.

The attention patterns used include sliding window attention, dilated sliding window attention, and global + sliding window attention. These can be viewed in the components section of this page.
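To make the linear scaling concrete, the combined pattern can be pictured as a boolean attention mask: each token attends to a fixed-size local window, while a few designated tokens attend globally. The sketch below is a hypothetical illustration using NumPy (the function name and parameters are assumptions, not the authors' implementation, which uses custom CUDA kernels for efficiency):

```python
import numpy as np

def longformer_attention_mask(seq_len, window, global_idx=()):
    """Illustrative sketch: mask[i, j] = True means token i may attend
    to token j. Each token sees neighbours within `window` positions;
    tokens in `global_idx` attend to, and are attended by, all positions."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo = max(0, i - window)
        hi = min(seq_len, i + window + 1)
        mask[i, lo:hi] = True          # local sliding window
    for g in global_idx:
        mask[g, :] = True              # global token attends everywhere
        mask[:, g] = True              # every token attends to it
    return mask

# Token 0 (e.g. [CLS]) is global; all others use a window of 1.
mask = longformer_attention_mask(8, window=1, global_idx=(0,))
```

Because each non-global token attends to at most `2 * window + 1` positions, the cost of the local part is O(seq_len × window) rather than O(seq_len²); the dilated variant spaces the window positions out with gaps to enlarge the receptive field at the same cost.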

Source: Longformer: The Long-Document Transformer

Latest Papers

PAPER | AUTHORS | DATE

Longformer for MS MARCO Document Re-ranking Task
Ivan Sekulić, Amir Soleimani, Mohammad Aliannejadi, Fabio Crestani
2020-09-20

Efficient Transformers: A Survey
Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler
2020-09-14

Fine-Tune Longformer for Jointly Predicting Rumor Stance and Veracity
Anant Khandelwal
2020-07-15

Document Classification for COVID-19 Literature
Bernal Jiménez Gutiérrez, Juncheng Zeng, Dongdong Zhang, Ping Zhang, Yu Su
2020-06-15

Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
2020-04-10
