Linear Warmup With Linear Decay is a learning rate schedule in which the learning rate is increased linearly for the first $n$ updates (the warmup phase) and then decayed linearly toward zero for the remainder of training.
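A minimal sketch of the schedule as a multiplicative factor on a base learning rate. The function name and parameters (`warmup_steps`, `total_steps`) are illustrative, not from the source; the same factor could be plugged into a scheduler such as PyTorch's `LambdaLR`.

```python
def linear_warmup_linear_decay(step, warmup_steps, total_steps):
    """Return the LR multiplier at a given step.

    Rises linearly from 0 to 1 over `warmup_steps`, then decays
    linearly from 1 back to 0 at `total_steps`.
    """
    if step < warmup_steps:
        return step / warmup_steps  # warmup phase
    # decay phase: fraction of remaining steps, clipped at 0
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))


# Example: base LR 1e-3, 100 warmup steps, 1000 total steps
base_lr = 1e-3
lrs = [base_lr * linear_warmup_linear_decay(s, 100, 1000) for s in range(1001)]
```

The multiplier peaks at 1.0 exactly when `step == warmup_steps` and reaches 0 at `total_steps`.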
Task | Papers | Share
---|---|---
Retrieval | 116 | 12.21%
Language Modelling | 109 | 11.47%
Question Answering | 60 | 6.32%
Large Language Model | 39 | 4.11%
Sentence | 34 | 3.58%
Sentiment Analysis | 33 | 3.47%
Text Classification | 30 | 3.16%
Information Retrieval | 22 | 2.32%
Text Generation | 19 | 2.00%