AdapterHub: A Framework for Adapting Transformers

15 Jul 2020 · Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych

The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of millions or billions of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks...
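The paper addresses this with adapters: small bottleneck layers inserted into each layer of a pre-trained transformer, so that per task only a few newly introduced parameters are trained and shared while the pre-trained weights stay frozen. Below is a minimal, illustrative PyTorch sketch of that general adapter idea, not AdapterHub's actual API; the class name, bottleneck size, and activation are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative adapter module: a small bottleneck placed after a
    transformer sub-layer. Only the adapter's parameters are trained;
    the pre-trained model stays frozen. Sizes are hypothetical."""

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 48):
        super().__init__()
        self.down_proj = nn.Linear(hidden_size, bottleneck_size)  # project down
        self.activation = nn.ReLU()
        self.up_proj = nn.Linear(bottleneck_size, hidden_size)    # project back up

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the pre-trained representation passes through
        # unchanged and the adapter only learns a small correction.
        return hidden_states + self.up_proj(self.activation(self.down_proj(hidden_states)))

adapter = BottleneckAdapter()
x = torch.randn(2, 16, 768)   # (batch, sequence, hidden)
print(adapter(x).shape)       # torch.Size([2, 16, 768])
```

With these example sizes, one adapter layer adds roughly 75K parameters, so sharing a trained adapter means exchanging a few megabytes rather than a full multi-hundred-megabyte model checkpoint.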

