Reading Comprehension

568 papers with code • 7 benchmarks • 95 datasets

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in the document.

Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is divided into four categories: cloze style, multiple choice, span prediction, and free-form answer; a toy instance of each format is sketched below.
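As a rough illustration of the four categories, the sketch below shows what a minimal instance of each format might look like. All passages, questions, and answers here are invented for illustration.

```python
# Hypothetical toy instances of the four MRC formats. All content invented.

cloze = {
    "passage": "The Nile flows through eleven countries before reaching the sea.",
    "question": "The Nile flows through ____ countries.",  # fill in the blank
    "answer": "eleven",
}

multiple_choice = {
    "passage": "The Nile flows through eleven countries before reaching the sea.",
    "question": "How many countries does the Nile flow through?",
    "options": ["nine", "ten", "eleven", "twelve"],
    "answer": "eleven",                                    # pick one option
}

span_prediction = {
    "passage": "The Nile flows through eleven countries before reaching the sea.",
    "question": "How many countries does the Nile flow through?",
    "answer_span": (23, 29),                               # passage[23:29] == "eleven"
}

free_form = {
    "passage": "The Nile flows through eleven countries before reaching the sea.",
    "question": "Summarize the passage in one sentence.",
    "answer": "The Nile crosses eleven countries on its way to the sea.",  # generated text
}
```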

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Libraries

Use these libraries to find Reading Comprehension models and implementations
See all 6 libraries.

Most implemented papers

Know What You Don't Know: Unanswerable Questions for SQuAD

worksheets/0x9a15a170 ACL 2018

Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context.
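As a quick illustration of this abstention behavior, the sketch below runs a SQuAD 2.0-style reader through the Hugging Face transformers question-answering pipeline. The checkpoint deepset/roberta-base-squad2 is one publicly available SQuAD 2.0 reader assumed here for illustration; the context and questions are invented.

```python
# Sketch: span extraction with abstention on SQuAD 2.0-style questions.
# Assumes the `transformers` library and the community checkpoint
# "deepset/roberta-base-squad2" (a reader fine-tuned on SQuAD 2.0).
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "SQuAD 2.0 combines answerable questions with unanswerable ones."

# Answerable: the answer span is present in the context.
print(qa(question="What does SQuAD 2.0 combine?", context=context))

# Unanswerable: with handle_impossible_answer=True the pipeline may
# return an empty answer instead of an unreliable guess.
print(qa(question="Who created the dataset?", context=context,
         handle_impossible_answer=True))
```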

Teaching Machines to Read and Comprehend

deepmind/rc-data NeurIPS 2015

Teaching machines to read natural language documents remains an elusive challenge.

Reading Wikipedia to Answer Open-Domain Questions

facebookresearch/DrQA ACL 2017

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
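Below is a minimal sketch of the retrieval half of this retrieve-then-read pipeline, using scikit-learn's TF-IDF in place of DrQA's hashed bigram TF-IDF; the documents and query are invented for illustration.

```python
# Sketch of DrQA-style retrieve-then-read, retrieval half only.
# scikit-learn TF-IDF with unigrams + bigrams approximates DrQA's
# hashed bigram TF-IDF retriever. Documents and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Warsaw is the capital and largest city of Poland.",
    "The Vistula is the longest river in Poland.",
    "Krakow was the capital of Poland until 1596.",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
doc_matrix = vectorizer.fit_transform(docs)

query = "What is the capital of Poland?"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]

# The top-scoring documents would be handed to a reader model,
# which extracts the answer span.
for i in scores.argsort()[::-1][:2]:
    print(round(scores[i], 3), docs[i])
```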

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism

NVIDIA/Megatron-LM 17 Sep 2019

To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT.
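The core technique is tensor (model) parallelism: the weight matrices of each transformer layer are split across GPUs. The single-process NumPy sketch below imitates a column-parallel linear layer, with concatenation over shards standing in for the all-gather used in the real multi-GPU setting.

```python
# Single-process sketch of Megatron-style tensor parallelism for one
# linear layer: the weight matrix is split column-wise across "devices",
# each shard computes its slice of the output, and the slices are
# concatenated (an all-gather in the real multi-GPU setting).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_shards = 8, 12, 4

x = rng.standard_normal((2, d_in))       # batch of activations
W = rng.standard_normal((d_in, d_out))   # full weight matrix

shards = np.split(W, n_shards, axis=1)   # d_out / n_shards columns each
partials = [x @ W_i for W_i in shards]   # one matmul per device
y_parallel = np.concatenate(partials, axis=1)

assert np.allclose(y_parallel, x @ W)    # matches the unsharded layer
```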

Learning to Ask: Neural Question Generation for Reading Comprehension

xinyadu/nqg ACL 2017

We study automatic question generation for sentences from text passages in reading comprehension.

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

microsoft/DeBERTa ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

A Unified MRC Framework for Named Entity Recognition

ShannonAI/mrc-for-flat-nested-ner ACL 2020

Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task.
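Concretely, the reformulation builds one (query, context) pair per entity type and lets a span-prediction reader extract all entities of that type. The sketch below shows the data construction; the query wordings are invented for illustration.

```python
# Sketch of the MRC reformulation of NER: one (query, context) pair per
# entity type; the "answer" is the set of spans of that type.
# Query wordings below are invented for illustration.
type_queries = {
    "PER": "Which person names are mentioned in the text?",
    "LOC": "Which locations are mentioned in the text?",
    "ORG": "Which organizations are mentioned in the text?",
}

sentence = "Barack Obama visited Berlin."

mrc_examples = [
    {"query": q, "context": sentence, "entity_type": t}
    for t, q in type_queries.items()
]

# A span-prediction reader then extracts zero or more answer spans per
# example, e.g. "Barack Obama" for PER and "Berlin" for LOC.
for ex in mrc_examples:
    print(ex)
```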

Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering

huggingface/transformers 10 Nov 2019

We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.
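A minimal sketch of such a passage graph, built with networkx; the passages and edge labels are invented for illustration.

```python
# Sketch of the passage graph described above: vertices are passages,
# edges mark passages linked by a knowledge-base relation or by
# co-occurrence in the same article. All content here is invented.
import networkx as nx

g = nx.Graph()
g.add_node("p1", text="Marie Curie was born in Warsaw.")
g.add_node("p2", text="Warsaw is the capital of Poland.")
g.add_node("p3", text="Curie won two Nobel Prizes.")

g.add_edge("p1", "p2", source="kb")             # shared entity: Warsaw
g.add_edge("p1", "p3", source="same_article")   # co-occurring passages

# A retriever can expand from an initial passage along the edges and
# feed the connected subgraph to the reader.
print(list(g.neighbors("p1")))
```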

mT5: A massively multilingual pre-trained text-to-text transformer

google-research/multilingual-t5 NAACL 2021

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.
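The sketch below shows mT5's text-to-text interface via the transformers library, using the public google/mt5-small checkpoint. Note that this checkpoint is only pretrained with span corruption, so it needs task fine-tuning before its generations are meaningful.

```python
# Sketch of mT5's text-to-text interface via `transformers`, using the
# public "google/mt5-small" checkpoint. This checkpoint is pretrained
# only (no supervised fine-tuning), so fine-tune before relying on it.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Every task is framed as text in -> text out, in any of the 101
# languages the model was pretrained on.
inputs = tokenizer("question: Where is the Eiffel Tower? context: "
                   "The Eiffel Tower is in Paris.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```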

SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering

Microsoft/SDNet 10 Dec 2018

Conversational question answering (CQA) is a novel QA task that requires understanding of dialogue context.