Question-Answer-Generation
10 papers with code • 0 benchmarks • 1 dataset
Libraries
Use these libraries to find Question-Answer-Generation models and implementations.
Latest papers
DCQA: Document-Level Chart Question Answering towards Complex Reasoning and Common-Sense Understanding
Our DCQA dataset is expected to foster research on understanding visualizations in documents, especially in scenarios that require complex reasoning over charts in visually rich documents.
Improving Low-Resource Question Answering using Active Learning in Multiple Stages
Furthermore, such models often yield very good performance, but only in the domain they were trained on.
TAG: Boosting Text-VQA via Text-aware Visual Question-answer Generation
To address this deficiency, we develop a new method to generate high-quality and diverse QA pairs by explicitly utilizing the existing rich text available in the scene context of each image.
MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding
Specifically, the task involves multi-hop questions that require reasoning over image-caption pairs to identify the grounded visual object being referred to and then predicting a span from the news body text to answer the question.
Change Detection Meets Visual Question Answering
In order to provide every user with flexible access to change information and help them better understand land-cover changes, we introduce a novel task: change detection-based visual question answering (CDVQA) on multi-temporal aerial images.
It is AI's Turn to Ask Humans a Question: Question-Answer Pair Generation for Children's Story Books
Existing question answering (QA) techniques are created mainly to answer questions asked by humans.
Quiz-Style Question Generation for News Stories
As a first step towards measuring news informedness at scale, we study the problem of quiz-style multiple-choice question generation, which may be used to survey users about their knowledge of recent news.
End-to-End Video Question-Answer Generation with Generator-Pretester Network
Furthermore, using only our generated QA pairs for the Video QA task, we can surpass some supervised baselines.
Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs
We validate our Information Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE) on several benchmark datasets, evaluating a QA model (BERT-base) trained either on only the generated QA pairs (QA-based evaluation) or on both the generated and human-labeled pairs (semi-supervised learning), against state-of-the-art baseline models.
Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus
In this paper, we propose Answer-Clue-Style-aware Question Generation (ACS-QG), which aims to automatically generate high-quality and diverse question-answer pairs from an unlabeled text corpus at scale by imitating the way humans ask questions.
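The answer-first direction that ACS-QG and several of the papers above describe (first pick an answer span in the text, then form a question around it) can be illustrated with a deliberately minimal sketch. This is not the ACS-QG model: it uses a crude capitalized-span heuristic as a stand-in for answer extraction and a cloze template as a stand-in for a learned question generator, purely to show the answer → question pipeline shape.

```python
import re

def generate_qa_pairs(text):
    """Toy answer-first QA-pair generation: treat capitalized spans as
    candidate answers (a crude entity-extraction stand-in), then turn the
    containing sentence into a cloze-style question. NOT the ACS-QG method;
    only an illustration of the answer -> question pipeline."""
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        # Candidate answers: runs of capitalized words, e.g. "Alan Turing".
        for match in re.finditer(r"\b([A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)\b", sentence):
            answer = match.group(1)
            if sentence.startswith(answer):
                continue  # skip sentence-initial capitals ("The", "It", ...)
            # Cloze question: blank out the answer span in the sentence.
            question = sentence.replace(answer, "____", 1)
            pairs.append((question, answer))
    return pairs

pairs = generate_qa_pairs(
    "The framework was proposed by Alan Turing. It runs in London."
)
```

A real system would replace both heuristics with learned components (an answer/clue extractor and a neural question generator), but the two-stage structure is the same.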