Distractor Generation
11 papers with code • 1 benchmark • 2 datasets
Given a passage, a question, and an answer phrase, the goal of distractor generation (DG) is to generate context-related wrong options (i.e., distractors) for multiple-choice questions (MCQs).
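A minimal sketch of the task interface in Python. The `DGInstance` dataclass and `assemble_mcq` helper are illustrative names invented here, not from any of the papers below:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DGInstance:
    """One DG example: the model sees the passage, question, and correct
    answer, and must produce wrong-but-plausible options."""
    passage: str
    question: str
    answer: str

def assemble_mcq(instance: DGInstance, distractors: List[str]) -> List[str]:
    """Combine the gold answer with generated distractors into the option
    set shown to a test taker (order would normally be shuffled)."""
    return [instance.answer] + distractors

example = DGInstance(
    passage="The Nile is generally regarded as the longest river in Africa.",
    question="Which river is generally regarded as the longest in Africa?",
    answer="the Nile",
)
# A DG model would be expected to return context-related wrong options such as:
print(assemble_mcq(example, ["the Congo", "the Niger", "the Zambezi"]))
```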
Latest papers with no code
Distractor Generation for Multiple-Choice Questions: A Survey of Methods, Datasets, and Evaluation
Distractors are important in learning evaluation.
BRAINTEASER: Lateral Thinking Puzzles for Large Language Models
The success of language models has inspired the NLP community to attend to tasks that require implicit and complex reasoning, relying on human-like commonsense mechanisms.
DISTO: Evaluating Textual Distractors for Multi-Choice Questions using Negative Sampling based Approach
DISTO ranks the performance of state-of-the-art DG models very differently from machine translation (MT)-based metrics, showing that MT metrics should not be used for distractor evaluation.
Automatic Distractor Generation for Multiple Choice Questions in Standard Tests
To assess the knowledge proficiency of a learner, the multiple-choice question is an efficient and widespread format in standardized tests.
Better Distractions: Transformer-based Distractor Generation and Multiple Choice Question Filtering
In this work, we train a GPT-2 language model to generate three distractors for a given question and text context, using the RACE dataset.
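A minimal inference sketch of this setup using the Hugging Face transformers library. The prompt serialization and decoding settings below are assumptions for illustration, not the paper's exact configuration, and `"gpt2"` stands in for the fine-tuned checkpoint:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Stand-in checkpoint; the paper fine-tunes GPT-2 on RACE first.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Assumed prompt layout: context, question, and answer, then ask for a distractor.
prompt = (
    "Context: The Nile is generally regarded as the longest river in Africa.\n"
    "Question: Which river is generally regarded as the longest in Africa?\n"
    "Answer: the Nile\n"
    "Distractor:"
)
inputs = tokenizer(prompt, return_tensors="pt")

# Sample three continuations, one per distractor, mirroring the paper's
# setup of generating three distractors per question.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    max_new_tokens=12,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    # Strip the prompt tokens and keep only the first generated line.
    text = tokenizer.decode(seq[inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print((text.strip().splitlines() or [""])[0])
```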
Knowledge-Driven Distractor Generation for Cloze-style Multiple Choice Questions
In this paper, we propose a novel configurable framework to automatically generate distractive choices for open-domain cloze-style multiple-choice questions. The framework incorporates a general-purpose knowledge base to effectively create a small distractor candidate set, and a feature-rich learning-to-rank model to select distractors that are both plausible and reliable.
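A toy sketch of this candidate-then-rank pattern: a knowledge base proposes same-type entities as candidates, and a learned scoring function ranks them. The `KB` dictionary, features, and weights below are illustrative stand-ins, not the paper's actual resources or model:

```python
from typing import Dict, List

# Stand-in "knowledge base": maps an answer entity to same-type neighbors.
KB: Dict[str, List[str]] = {
    "the Nile": ["the Congo", "the Niger", "the Zambezi", "the Amazon"],
}

def features(candidate: str, answer: str, question: str) -> List[float]:
    """A feature-rich model would use embedding similarity, frequency,
    POS agreement, etc.; here we use two trivial surface features."""
    length_ratio = min(len(candidate), len(answer)) / max(len(candidate), len(answer))
    starts_alike = float(candidate.split()[0] == answer.split()[0])
    return [length_ratio, starts_alike]

# Weights that a learning-to-rank model would be trained to produce;
# fixed by hand here purely for illustration.
WEIGHTS = [0.7, 0.3]

def rank_distractors(answer: str, question: str, k: int = 3) -> List[str]:
    """Score each KB candidate with a linear model and return the top k."""
    candidates = KB.get(answer, [])
    scored = sorted(
        candidates,
        key=lambda c: sum(w * f for w, f in zip(WEIGHTS, features(c, answer, question))),
        reverse=True,
    )
    return scored[:k]

print(rank_distractors("the Nile", "Which river is the longest in Africa?"))
```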
Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension
Prior methods also did not emphasize the relationship between the distractor and the article, so the generated distractors are not semantically relevant to the article and fail to form a set of meaningful options.
Good, Better, Best: Textual Distractors Generation for Multiple-Choice Visual Question Answering via Reinforcement Learning
Multiple-choice visual question answering (VQA) has recently drawn increasing attention from researchers and end users.
Equipping Educational Applications with Domain Knowledge
One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science).