Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers

5 Jun 2020 · Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, Adrian Weller

Transformer models have achieved state-of-the-art results across a diverse range of domains. However, concern over the quadratic cost of training the attention mechanism to learn complex dependencies between distant inputs continues to grow...
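
The complexity concern in the abstract is easiest to see in code. Below is a minimal NumPy sketch contrasting standard softmax attention, which materializes an L × L score matrix, with a generic kernelized attention that factorizes through a feature map and scales linearly in sequence length. The feature map `phi` (a shifted ReLU) and the function names are illustrative assumptions for this sketch, not the paper's actual construction.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: the L x L score matrix makes this
    # O(L^2) in both time and memory.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: with a positive feature map phi, attention
    # factorizes as phi(Q) @ (phi(K).T @ V), avoiding the L x L matrix
    # entirely -- cost is linear in L.
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                      # (d, d): independent of L
    normalizer = Qp @ Kp.sum(axis=0)   # (L,): per-row normalization
    return (Qp @ kv) / normalizer[:, None]

L, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((L, d)) * 0.1 for _ in range(3))
out_exact = softmax_attention(Q, K, V)
out_linear = linear_attention(Q, K, V)  # approximation; quality depends on phi
```

With a feature map chosen to approximate the softmax kernel, the linear form can track the exact output closely; the crude `phi` above only demonstrates the complexity argument, not the approximation quality.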
