StrategyQA

13 papers with code • 0 benchmarks • 0 datasets

StrategyQA aims to measure the ability of models to answer questions that require multi-step implicit reasoning. For example, answering "Did Aristotle use a laptop?" requires implicitly combining when Aristotle lived with when laptops were invented, and then answering yes or no.

Source: BIG-bench
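Since StrategyQA questions resolve to yes/no answers, evaluation reduces to boolean accuracy. The sketch below is only illustrative: the item field names ("question", "answer") and the predict_yes_no callable are assumptions, not the official StrategyQA loader or tooling.

```python
# Minimal sketch of yes/no accuracy on StrategyQA-style items.
# Field names ("question", "answer") and predict_yes_no() are
# illustrative assumptions, not the official StrategyQA tooling.
from typing import Callable, Dict, List


def strategyqa_accuracy(
    items: List[Dict],
    predict_yes_no: Callable[[str], bool],
) -> float:
    """Fraction of items whose predicted boolean matches the gold answer."""
    if not items:
        return 0.0
    correct = sum(
        int(predict_yes_no(item["question"]) == bool(item["answer"]))
        for item in items
    )
    return correct / len(items)


if __name__ == "__main__":
    # Toy record in the spirit of the benchmark (not a real dataset entry).
    demo = [{"question": "Did Aristotle use a laptop?", "answer": False}]
    print(strategyqa_accuracy(demo, lambda q: False))  # 1.0
```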

Most implemented papers

Escape Sky-high Cost: Early-stopping Self-Consistency for Multi-step Reasoning

yiwei98/esc 19 Jan 2024

Self-consistency (SC) has been a widely used decoding strategy for chain-of-thought reasoning.
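The early-stopping variant proposed here cuts the cost of SC by drawing chain-of-thought samples in small windows and halting once a window agrees, instead of always spending the full sampling budget. The sketch below is one reading of that idea under stated assumptions, not the authors' implementation; sample_answer (one sampled reasoning chain returning its final answer) is an assumed callable.

```python
# Hedged sketch of early-stopping self-consistency: sample chain-of-thought
# answers in small windows and stop once a window is unanimous, rather than
# always drawing the full self-consistency budget. sample_answer() is an
# assumed callable that runs one sampled reasoning chain and returns its answer.
from collections import Counter
from typing import Callable, List, TypeVar

A = TypeVar("A")


def early_stopping_self_consistency(
    sample_answer: Callable[[], A],
    window_size: int = 5,
    max_samples: int = 40,
) -> A:
    answers: List[A] = []
    while len(answers) < max_samples:
        window = [sample_answer() for _ in range(window_size)]
        answers.extend(window)
        # Stop early when the latest window agrees on a single answer.
        if len(set(window)) == 1:
            break
    # Otherwise fall back to the usual majority vote over all samples drawn.
    return Counter(answers).most_common(1)[0][0]
```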

Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation

pphuc25/distil-cd 21 Feb 2024

We propose a straightforward approach called Distillation Contrastive Decoding (DCD) to enhance the reasoning capabilities of Large Language Models (LLMs) during inference.
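Contrastive decoding in general reranks next-token candidates by penalizing what a weaker "amateur" model also finds likely, keeping only tokens the expert already deems plausible; DCD constructs its contrast via distillation, so its exact procedure differs from the generic sketch below. The function, the alpha plausibility cutoff, and the toy logits are illustrative assumptions, not the paper's code.

```python
# Generic contrastive-decoding sketch (not the paper's exact DCD procedure):
# favor tokens the expert model prefers much more than a weaker "amateur"
# model, restricted to tokens the expert already finds plausible.
import numpy as np


def contrastive_logits(
    expert_logits: np.ndarray,   # next-token logits from the strong model
    amateur_logits: np.ndarray,  # next-token logits from the weak/contrast model
    alpha: float = 0.1,          # plausibility cutoff relative to the expert's top token
) -> np.ndarray:
    expert_logprobs = expert_logits - np.logaddexp.reduce(expert_logits)
    amateur_logprobs = amateur_logits - np.logaddexp.reduce(amateur_logits)
    # Keep only tokens whose expert probability is within a factor alpha of the best token.
    plausible = expert_logprobs >= expert_logprobs.max() + np.log(alpha)
    # Score plausible tokens by the expert-vs-amateur log-probability gap.
    return np.where(plausible, expert_logprobs - amateur_logprobs, -np.inf)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expert, amateur = rng.normal(size=100), rng.normal(size=100)
    print(int(np.argmax(contrastive_logits(expert, amateur))))
```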

CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge

d3mlab/cr-lt-kgqa 3 Mar 2024

In this work, we seek a novel KGQA dataset that supports commonsense reasoning and focuses on long-tail entities (e.g., non-mainstream and recent entities) on which LLMs frequently hallucinate, thus creating the need for novel methodologies that leverage the KG for factual and attributable commonsense inference.