Chain-of-thought prompts contain a series of intermediate reasoning steps, and they have been shown to significantly improve the ability of large language models to perform tasks that involve complex reasoning (e.g., arithmetic, commonsense reasoning, and symbolic reasoning).
Source: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., 2022)
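As a concrete illustration, here is a minimal Python sketch of how a few-shot chain-of-thought prompt can be assembled. The exemplar question and worked rationale are adapted from the paper's examples; the `COT_EXEMPLARS` structure and `build_cot_prompt` helper are illustrative assumptions, not code from the paper or any library.

```python
# A minimal sketch of few-shot chain-of-thought prompting in plain Python.
# The exemplar adapts the tennis-ball problem from the paper; the names
# COT_EXEMPLARS and build_cot_prompt are illustrative, not an official API.

COT_EXEMPLARS = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 more cans of tennis "
            "balls. Each can has 3 tennis balls. How many tennis balls "
            "does he have now?"
        ),
        # The rationale spells out intermediate reasoning steps, which is
        # what distinguishes chain-of-thought from standard prompting.
        "rationale": (
            "Roger started with 5 balls. 2 cans of 3 tennis balls each "
            "is 6 tennis balls. 5 + 6 = 11."
        ),
        "answer": "11",
    },
]


def build_cot_prompt(exemplars, question):
    """Concatenate worked exemplars, then the new question for the model."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['rationale']} The answer is {ex['answer']}.\n"
        )
    # The trailing "A:" invites the model to produce its own reasoning
    # steps before stating a final answer.
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = build_cot_prompt(
        COT_EXEMPLARS,
        "A juggler can juggle 16 balls. Half of the balls are golf "
        "balls, and half of the golf balls are blue. How many blue "
        "golf balls are there?",
    )
    print(prompt)  # send this string to any large-language-model API
```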
Tasks in which this method appears most often, by number and share of associated papers:

Task | Papers | Share |
---|---|---|
Language Modelling | 5 | 9.62% |
Question Answering | 4 | 7.69% |
In-Context Learning | 3 | 5.77% |
Sentence | 3 | 5.77% |
Arithmetic Reasoning | 3 | 5.77% |
Sentiment Analysis | 2 | 3.85% |
Stance Detection | 2 | 3.85% |
Prompt Engineering | 2 | 3.85% |
Multi-hop Question Answering | 2 | 3.85% |