Question Generation
223 papers with code • 8 benchmarks • 23 datasets
The goal of Question Generation is to generate a valid, fluent question from a given passage and a target answer. Question Generation can be used in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models, and enabling chatbots to lead a conversation.
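To make the input/output shape of the task concrete, here is a deliberately minimal, rule-based sketch: given a passage and a target answer span, it forms a cloze-style question by replacing the answer with a wh-placeholder. Real systems use neural seq2seq models (e.g. T5-style); the function name and heuristic here are purely illustrative.

```python
# Toy illustration of the Question Generation task: passage + target answer -> question.
# Real QG systems are neural; this only demonstrates the task's interface.
def cloze_question(passage: str, answer: str) -> str:
    """Form a cloze-style question by replacing the answer span with 'what'."""
    if answer not in passage:
        raise ValueError("answer must appear in the passage")
    # Pick the first sentence containing the answer span.
    sentence = next(s for s in passage.split(". ") if answer in s)
    return sentence.replace(answer, "what").rstrip(".") + "?"

passage = "The Eiffel Tower was completed in 1889. It is located in Paris."
print(cloze_question(passage, "1889"))
# -> The Eiffel Tower was completed in what?
```

A learned model would instead produce a natural-sounding question ("When was the Eiffel Tower completed?"), which is exactly the fluency gap the papers below address.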
Libraries
Use these libraries to find Question Generation models and implementations.
Most implemented papers
Learning Dense Representations of Phrases at Scale
Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019).
Exploring Models and Data for Image Question Answering
A suite of baseline results on this new dataset is also presented.
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training
We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM).
IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation
The T5 model and its unified text-to-text paradigm contributed to advancing the state of the art for many natural language processing tasks.
Generating Natural Questions About an Image
There has been an explosion of work in the vision & language community during the past few years from image captioning to video transcription, and answering questions about images.
Multi-hop Reading Comprehension through Question Decomposition and Rescoring
Multi-hop Reading Comprehension (RC) requires reasoning and aggregation across several paragraphs.
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering
Since traditional evaluation metrics (e.g., BLEU) often fall short in evaluating the quality of generated questions, the authors propose a QA-based evaluation method which measures the QG model's ability to mimic human annotators in generating QA training data.
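The QA-based evaluation idea above scores a QG model by whether a QA system, trained on or applied to the generated questions, recovers the intended answer rather than by n-gram overlap with a reference question. As a sketch of the scoring half, the token-level F1 commonly used in SQuAD-style QA evaluation can be computed as follows (function name and simplifications are ours, not the paper's; real evaluation also normalizes punctuation and articles):

```python
# Token-level F1 between a QA model's predicted answer and the target answer,
# as used in SQuAD-style evaluation (simplified: lowercasing + whitespace split).
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts each shared token at most min(count) times.
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("in the year 1889", "1889"))  # partial credit: 0.4
```

Averaging this score over a set of generated question-answer pairs gives a quality signal that correlates with downstream QA usefulness better than BLEU against a single reference question.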
Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus
In this paper, we propose Answer-Clue-Style-aware Question Generation (ACS-QG), which aims at automatically generating high-quality and diverse question-answer pairs from an unlabeled text corpus at scale by imitating the way a human asks questions.
PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation
An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.
CliniQG4QA: Generating Diverse Questions for Domain Adaptation of Clinical Question Answering
Clinical question answering (QA) aims to automatically answer questions from medical professionals based on clinical texts.