GPT-3 is an autoregressive transformer language model with 175 billion parameters. It uses the same model and architecture as GPT-2, including the modified initialization, pre-normalization, and reversible tokenization, with the exception that GPT-3 uses alternating dense and locally banded sparse attention patterns in the layers of the transformer, similar to the Sparse Transformer.
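To make the attention-pattern description concrete, below is a minimal NumPy sketch of causal dense vs. locally banded attention masks alternated across layers. The even/odd layer assignment, the `bandwidth` value, and the function names are illustrative assumptions, and the banded mask omits the strided component of the full Sparse Transformer pattern.

```python
import numpy as np

def dense_mask(n: int) -> np.ndarray:
    """Causal (lower-triangular) mask: each position attends to all earlier positions."""
    return np.tril(np.ones((n, n), dtype=bool))

def banded_mask(n: int, bandwidth: int) -> np.ndarray:
    """Causal locally banded mask: each position attends only to the `bandwidth`
    most recent positions. (A sketch of local sparse attention; the Sparse
    Transformer's full pattern also includes strided connections.)"""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, max(0, i - bandwidth + 1):i + 1] = True
    return mask

def layer_masks(num_layers: int, n: int, bandwidth: int) -> list[np.ndarray]:
    """Alternate dense and banded masks across layers, as described above.
    (Assigning dense to even layers is an assumption for illustration.)"""
    return [dense_mask(n) if layer % 2 == 0 else banded_mask(n, bandwidth)
            for layer in range(num_layers)]

# Example: 4 layers over a sequence of 8 tokens with a local band of width 3.
for i, m in enumerate(layer_masks(num_layers=4, n=8, bandwidth=3)):
    kind = "dense" if i % 2 == 0 else "banded"
    print(f"layer {i} ({kind}): token 7 attends to positions {np.flatnonzero(m[7])}")
```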
Source: Language Models are Few-Shot Learners
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 85 | 11.04% |
| Large Language Model | 56 | 7.27% |
| Question Answering | 48 | 6.23% |
| Prompt Engineering | 31 | 4.03% |
| Retrieval | 29 | 3.77% |
| In-Context Learning | 25 | 3.25% |
| Code Generation | 24 | 3.12% |
| Sentence | 23 | 2.99% |
| Text Generation | 19 | 2.47% |