Reading Comprehension

569 papers with code • 7 benchmarks • 95 datasets

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in the document.

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
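The span-prediction category above can be illustrated with a minimal sketch: given per-token start and end scores (hand-made toy values here, not produced by any real model), pick the highest-scoring valid span with start ≤ end, which is how most extractive MRC models decode an answer.

```python
def best_span(start_scores, end_scores, max_len=15):
    """Return (start, end) maximizing start_scores[i] + end_scores[j], i <= j."""
    best, best_score = (0, 0), float("-inf")
    n = len(start_scores)
    for i in range(n):
        # cap span length, a common decoding heuristic
        for j in range(i, min(n, i + max_len)):
            score = start_scores[i] + end_scores[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best

# Toy passage and scores for the question "What is the capital of France?"
tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
start = [0.1, 0.0, 0.0, 0.2, 0.0, 2.5, 0.0]
end   = [0.0, 0.1, 0.0, 0.3, 0.0, 2.0, 0.1]
i, j = best_span(start, end)
print(" ".join(tokens[i:j + 1]))  # -> Paris
```

Cloze-style and multiple-choice tasks instead score a small fixed set of candidates, and free-form answer tasks generate text directly.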

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Libraries

Use these libraries to find Reading Comprehension models and implementations

Most implemented papers

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated Transformer language model, which we call the Pathways Language Model (PaLM).

Machine Comprehension Using Match-LSTM and Answer Pointer

shuohangwang/SeqMatchSeq 29 Aug 2016

We propose two ways of using Pointer Net for our task.

Stochastic Answer Networks for Machine Reading Comprehension

kevinduh/san_mrc ACL 2018

We propose a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension.

CoQA: A Conversational Question Answering Challenge

stanfordnlp/coqa-baselines TACL 2019

Humans gather information by engaging in conversations involving a series of interconnected questions and answers.

Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension

xycforgithub/MultiTask-MRC NAACL 2019

We propose a multi-task learning framework to learn a joint Machine Reading Comprehension (MRC) model that can be applied to a wide range of MRC tasks in different domains.

Stochastic Answer Networks for SQuAD 2.0

kevinduh/san_mrc 24 Sep 2018

This paper presents an extension of the Stochastic Answer Network (SAN), one of the state-of-the-art machine reading comprehension models, to be able to judge whether a question is unanswerable or not.

Long Short-Term Memory-Networks for Machine Reading

cheng6076/SNLI-attention EMNLP 2016

In this paper we address the question of how to render sequence-level networks better at handling structured input.

Gated-Attention Readers for Text Comprehension

bdhingra/ga-reader ACL 2017

In this paper we study the problem of answering cloze-style questions over documents.

Machine Comprehension by Text-to-Text Neural Question Generation

bloomsburyai/question-generation WS 2017

We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers.

A Simple and Effective Model for Answering Multi-span Questions

eladsegal/tag-based-multi-span-extraction EMNLP 2020

Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to alleviate the learning problem and avoid the need for a model that generates text explicitly.
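The restriction described above can be made concrete: a single-span model's output space is the set of all contiguous (start, end) pairs over the input, which is O(n²) candidates and cannot represent an answer made of disjoint spans. A small sketch (toy data, not from the paper):

```python
def all_spans(tokens):
    """Enumerate every contiguous span as (start, end, text)."""
    return [(i, j, " ".join(tokens[i:j + 1]))
            for i in range(len(tokens))
            for j in range(i, len(tokens))]

tokens = ["Alice", "and", "Bob", "wrote", "the", "paper"]
spans = all_spans(tokens)
print(len(spans))  # 21 spans for 6 tokens: n * (n + 1) / 2

# A multi-span answer such as {"Alice", "Bob"} is not any single
# contiguous span -- the gap multi-span models set out to close.
print(any(text == "Alice Bob" for _, _, text in spans))  # False
```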