The goal of Question Generation is to generate a valid and fluent question given a passage and a target answer. Question Generation is useful in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models, and enabling chatbots to lead a conversation.
We observe that our method consistently outperforms beam search (BS) and previously proposed techniques for diverse decoding from neural sequence models.
Current pre-training work in natural language generation pays little attention to the problem of exposure bias on downstream tasks.
Ranked #1 on Text Summarization on GigaWord-10k (using extra training data)
We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM); a toy sketch of the pseudo-masking idea follows below.
Ranked #3 on Question Generation on SQuAD1.1 (using extra training data)
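The pseudo-masking idea can be illustrated with a toy input builder. This is a minimal sketch, not the UniLMv2 implementation: masked positions are replaced by `[MASK]` tokens for the autoencoding objective, and pseudo `[P]` tokens reusing the masked positions' ids are appended for the partially autoregressive objective. The token names and helper function are illustrative assumptions.

```python
# Sketch (not official UniLMv2 code): assemble a pseudo-masked input.
MASK, PSEUDO = "[MASK]", "[P]"

def build_pmlm_input(tokens, masked_positions):
    """Return (input_tokens, position_ids, targets) for one example."""
    input_tokens, position_ids = [], []
    # Pass 1: corrupted sequence for the autoencoding objective.
    for i, tok in enumerate(tokens):
        input_tokens.append(MASK if i in masked_positions else tok)
        position_ids.append(i)
    # Pass 2: appended pseudo tokens sharing the masked positions' ids;
    # the real model attends to these block by block, autoregressively.
    targets = []
    for i in sorted(masked_positions):
        input_tokens.append(PSEUDO)
        position_ids.append(i)      # same position id as the masked slot
        targets.append(tokens[i])   # token to predict at this slot
    return input_tokens, position_ids, targets

tokens = ["the", "cat", "sat", "on", "the", "mat"]
inp, pos, tgt = build_pmlm_input(tokens, masked_positions={1, 4})
print(inp)  # ['the', '[MASK]', 'sat', 'on', '[MASK]', 'mat', '[P]', '[P]']
print(pos)  # [0, 1, 2, 3, 4, 5, 1, 4]
print(tgt)  # ['cat', 'the']
```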
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
Ranked #2 on Generative Question Answering on CoQA (using extra training data)
We study automatic question generation for sentences from text passages in reading comprehension.
We introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency.
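The roundtrip-consistency filter described above is simple to express in code: keep a synthetic (question, answer) pair only if a QA model, asked the generated question over the passage, recovers the originally extracted answer. In this hedged sketch, `answer_extractor`, `question_generator`, and `qa_model` are hypothetical stand-ins for the trained models, not an actual API.

```python
# Sketch of roundtrip-consistency filtering for synthetic QA corpora.
def roundtrip_filter(passages, answer_extractor, question_generator, qa_model):
    kept = []
    for passage in passages:
        for answer in answer_extractor(passage):
            question = question_generator(passage, answer)
            predicted = qa_model(passage, question)
            if predicted == answer:  # roundtrip consistency check
                kept.append((passage, question, answer))
    return kept

# Toy stand-ins so the sketch runs end to end.
passages = ["Marie Curie won two Nobel Prizes."]
extract = lambda p: ["Marie Curie"]
generate = lambda p, a: "Who won two Nobel Prizes?"
answer = lambda p, q: "Marie Curie"
print(roundtrip_filter(passages, extract, generate, answer))
```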
In this paper, we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction together with an n-stream self-attention mechanism (a sketch of the objective follows below).
Ranked #1 on Text Summarization on GigaWord (using extra training data)
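The future n-gram objective can be sketched as n predicting streams, where stream k at step t is trained against the token k steps ahead. The toy loss below, with random logits standing in for model outputs, is a minimal illustration under that reading; ProphetNet's n-stream self-attention is not reproduced here.

```python
# Sketch of a future n-gram prediction loss over n predicting streams.
import torch
import torch.nn.functional as F

def future_ngram_loss(stream_logits, targets):
    """stream_logits: list of n tensors [batch, T, vocab]; stream k at
    step t is scored against the token k steps ahead, targets[:, t + k]."""
    total = 0.0
    for k, logits in enumerate(stream_logits):
        valid = targets.size(1) - k  # positions that have a target k ahead
        total = total + F.cross_entropy(
            logits[:, :valid].reshape(-1, logits.size(-1)),
            targets[:, k:].reshape(-1),
        )
    return total / len(stream_logits)

# Toy usage: batch 2, length 5, vocab 11, n = 2 predicting streams.
streams = [torch.randn(2, 5, 11) for _ in range(2)]
targets = torch.randint(11, (2, 5))
print(future_ngram_loss(streams, targets))
```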
Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation (a generic sketch of such a model follows below).
Ranked #8 on Question Generation on SQuAD1.1
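As a point of reference, here is a compact, generic sketch of the kind of attention-based Seq2Seq model these approaches build on: an encoder over the passage and a decoder that generates the question, with dot-product attention over encoder states. The layer choices and hyperparameters are illustrative assumptions, not any specific paper's configuration.

```python
# Generic attention-based Seq2Seq sketch for question generation.
import torch
import torch.nn as nn

class Seq2SeqQG(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, passage_ids, question_ids):
        enc, h = self.encoder(self.embed(passage_ids))    # [B, Tp, H]
        dec_in = self.embed(question_ids)                 # [B, Tq, H]
        # Dot-product attention of decoder inputs over encoder states.
        scores = self.attn(dec_in) @ enc.transpose(1, 2)  # [B, Tq, Tp]
        context = scores.softmax(-1) @ enc                # [B, Tq, H]
        dec_out, _ = self.decoder(torch.cat([dec_in, context], -1), h)
        return self.out(dec_out)                          # [B, Tq, V]

model = Seq2SeqQG(vocab_size=100)
logits = model(torch.randint(100, (2, 12)), torch.randint(100, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 100])
```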