Distractor Generation

11 papers with code • 1 benchmark • 2 datasets

Given a passage, a question, and an answer phrase, the goal of distractor generation (DG) is to generate context-related wrong options (i.e., distractors) for multiple-choice questions (MCQs).
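
As a rough illustration of the task's input–output shape (the field names and the toy example below are illustrative, not drawn from any particular dataset), a DG instance can be represented as:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DGInstance:
    """One distractor-generation example: a model reads the passage,
    question, and gold answer, and must produce wrong-but-plausible options."""
    passage: str            # context the question is grounded in
    question: str           # the MCQ stem
    answer: str             # the correct option
    distractors: List[str]  # target outputs: context-related wrong options

example = DGInstance(
    passage="The mitochondrion produces most of the cell's ATP.",
    question="Which organelle produces most of the cell's ATP?",
    answer="mitochondrion",
    distractors=["nucleus", "ribosome", "Golgi apparatus"],
)
```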

Latest papers with no code

Distractor Generation for Multiple-Choice Questions: A Survey of Methods, Datasets, and Evaluation

no code yet • 2 Feb 2024

Distractors are important in learning evaluation.

BRAINTEASER: Lateral Thinking Puzzles for Large Language Models

no code yet • 8 Oct 2023

The success of language models has inspired the NLP community to attend to tasks that require implicit and complex reasoning, relying on human-like commonsense mechanisms.

DISTO: Evaluating Textual Distractors for Multi-Choice Questions using Negative Sampling based Approach

no code yet • 10 Apr 2023

DISTO ranks the performance of state-of-the-art DG models very differently from machine translation (MT) based metrics, showing that MT metrics should not be used for distractor evaluation.

Automatic Distractor Generation for Multiple Choice Questions in Standard Tests

no code yet • COLING 2020

To assess the knowledge proficiency of a learner, the multiple-choice question is an efficient and widespread format in standard tests.

Better Distractions: Transformer-based Distractor Generation and Multiple Choice Question Filtering

no code yet • 19 Oct 2020

In this work, we train a GPT-2 language model to generate three distractors for a given question and text context, using the RACE dataset.
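
A minimal sketch of what such a GPT-2 setup might look like with the Hugging Face transformers library; the prompt format and decoding settings below are assumptions for illustration, not the paper's exact fine-tuning scheme, and the base model would first need fine-tuning on RACE-style examples to produce useful distractors:

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Assumed prompt format: plain-text markers for context, question, and
# answer; the paper's actual fine-tuning scheme may use other separators.
prompt = (
    "Context: The mitochondrion produces most of the cell's ATP.\n"
    "Question: Which organelle produces most of the cell's ATP?\n"
    "Answer: mitochondrion\n"
    "Distractor:"
)
inputs = tokenizer(prompt, return_tensors="pt")

# Sample three candidate distractors; an untuned GPT-2 will emit noise,
# which is why the model is first fine-tuned on RACE-style triples.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    max_new_tokens=10,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
prompt_len = inputs["input_ids"].shape[1]
for out in outputs:
    print(tokenizer.decode(out[prompt_len:], skip_special_tokens=True).strip())
```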

Knowledge-Driven Distractor Generation for Cloze-style Multiple Choice Questions

no code yet • 21 Apr 2020

In this paper, we propose a novel configurable framework to automatically generate distractive choices for open-domain cloze-style multiple-choice questions. The framework incorporates a general-purpose knowledge base to effectively create a small distractor candidate set, and a feature-rich learning-to-rank model to select distractors that are both plausible and reliable.
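
A toy sketch of that two-stage candidate-then-rank pipeline is below; the in-memory knowledge base, the features, and the hand-set weights are placeholders for illustration, whereas the paper uses a general-purpose KB and a trained learning-to-rank model:

```python
# Stage 1: pull candidates related to the answer from a knowledge base.
# Stage 2: rank them with a feature-based scorer.
KB = {  # stand-in for a general-purpose knowledge base
    "mitochondrion": ["nucleus", "chloroplast", "ribosome", "cell"],
}

def features(answer: str, candidate: str) -> list[float]:
    # Illustrative features only: length similarity and shared first letter.
    len_sim = 1.0 - abs(len(answer) - len(candidate)) / max(len(answer), len(candidate))
    same_initial = float(answer[0].lower() == candidate[0].lower())
    return [len_sim, same_initial]

WEIGHTS = [0.7, 0.3]  # would be learned from annotated data in the paper

def rank_distractors(answer: str, k: int = 3) -> list[str]:
    candidates = KB.get(answer, [])
    scored = [
        (sum(w * f for w, f in zip(WEIGHTS, features(answer, c))), c)
        for c in candidates
    ]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

print(rank_distractors("mitochondrion"))
```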

Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension

no code yet • 20 Nov 2019

Second, prior approaches did not emphasize the relationship between the distractor and the article, so the generated distractors are not semantically relevant to the article and thus fail to form a set of meaningful options.

Good, Better, Best: Textual Distractors Generation for Multiple-Choice Visual Question Answering via Reinforcement Learning

no code yet • 21 Oct 2019

Multiple-choice VQA has drawn increasing attention from researchers and end-users recently.

Equipping Educational Applications with Domain Knowledge

no code yet • WS 2019

One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science).