Well begun is half done: Importance of Starting Right in Multi-Step Math Reasoning

14 Nov 2023 · Kushal Jain, Niket Tandon, Kumar Shridhar

Smaller language models can solve complex reasoning tasks better by learning to generate rationales for their predictions. However, we observe that these smaller models can struggle to start correctly, and that once the start is corrected, they can solve tasks they would otherwise have failed. We propose two ways in which a smaller model can benefit from initial guidance: 1) asking an LLM for initial guidance, and 2) self-questioning guidance, where the student model first poses a question about how to start and then continues the reasoning chain from its own answer. We extend question-based guidance into a prompting technique, QuestCoT, in which starting with a question before the chain of reasoning proves useful. On two multi-step math reasoning datasets, GSM8K and SVAMP, we show that starting correctly leads to significant performance gains (up to $+14$ points with LLM guidance and $+6$ points with QuestCoT).
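The two guidance modes are simple to sketch in code. Below is a minimal illustration, assuming a hypothetical generate(model, prompt) helper that calls a language model; the prompt wording, function names, and framing are our assumptions for illustration, not the paper's exact prompts.

```python
# Sketch of the two initial-guidance modes described in the abstract.
# The `generate` helper and all prompt wording are illustrative
# assumptions; the paper's actual prompts may differ.

def generate(model, prompt: str) -> str:
    """Placeholder for a call to a language model (e.g., via an API)."""
    raise NotImplementedError

def llm_guided_answer(student, teacher_llm, problem: str) -> str:
    # Mode 1: ask a larger LLM only for the first reasoning step.
    first_step = generate(
        teacher_llm,
        f"Problem: {problem}\nGive only the first step of the solution:",
    )
    # The smaller student model continues the chain from that start.
    return generate(
        student,
        f"Problem: {problem}\nSolution:\n{first_step}\n"
        "Continue the reasoning step by step and give the final answer:",
    )

def questcot_answer(student, problem: str) -> str:
    # Mode 2 (QuestCoT-style): the student first poses a question about
    # how to start, then reasons onward from its own answer.
    return generate(
        student,
        f"Problem: {problem}\n"
        "First ask yourself: what should I compute first?\n"
        "Answer that question, then continue reasoning step by step "
        "and give the final answer:",
    )
```

In both sketches the key design choice is the same: the expensive part of the pipeline (an external LLM, or an extra self-generated question) is spent only on getting the first step right, after which the smaller model completes the chain on its own.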


Datasets

GSM8K · SVAMP