Reading Comprehension
569 papers with code • 7 benchmarks • 95 datasets
Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.
Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
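For the span-prediction category, extractive QA models typically output a start score and an end score per token, and the answer is decoded as the highest-scoring valid span. A minimal sketch of that decoding step (the function name, toy scores, and length cap are illustrative, not from any specific model):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) token indices maximizing start+end score,
    subject to start <= end and span length <= max_len."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy per-token scores over a 5-token passage.
start = [0.1, 2.0, 0.3, 0.0, -1.0]
end = [0.0, 0.5, 1.8, 0.2, 0.0]
print(best_span(start, end))  # (1, 2)
```

Real implementations usually decode from the top-k start and end positions rather than all pairs, but the validity constraints (start before end, bounded length) are the same.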
Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
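Span-based benchmarks in this family are commonly scored with exact match and token-level F1 between the predicted and gold answer strings. A sketch of SQuAD-style F1 (the normalization steps follow the usual convention of lowercasing, stripping punctuation and articles; function names are illustrative):

```python
import re
import string
from collections import Counter


def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def f1_score(prediction, gold):
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


print(f1_score("the Eiffel Tower", "Eiffel Tower in Paris"))  # 0.666...
```

Multiple-choice benchmarks such as RACE instead report plain accuracy over the answer options.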
The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets
Libraries
Use these libraries to find Reading Comprehension models and implementations.
Subtasks
- Machine Reading Comprehension
- Intent Recognition
- Implicit Relations
- LAMBADA
- Question Selection
- Multi-Hop Reading Comprehension
- Implicatures
- Logical Reasoning Reading Comprehension
- English Proverbs
- Fantasy Reasoning
- Figure Of Speech Detection
- Formal Fallacies Syllogisms Negation
- GRE Reading Comprehension
- Hyperbaton
- Movie Dialog Same Or Different
- Nonsense Words Grammar
- Phrase Relatedness
- RACE-h
- RACE-m
Latest papers with no code
CausalBench: A Comprehensive Benchmark for Causal Learning Capability of Large Language Models
To address these challenges, this paper proposes a comprehensive benchmark, namely CausalBench, to evaluate the causality understanding capabilities of LLMs.
LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements
In particular, while some models prove virtually unaffected by knowledge conflicts in affirmative and negative contexts, when faced with more semantically involved modal and conditional environments, they often fail to separate the text from their internal knowledge.
XL$^2$Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies
However, prior benchmarks create datasets that ostensibly cater to long-text comprehension by expanding the input of traditional tasks, which falls short of exhibiting the unique characteristics of long-text understanding, including long-dependency tasks and text lengths compatible with modern LLMs' context window sizes.
The Hallucinations Leaderboard -- An Open Effort to Measure Hallucinations in Large Language Models
Large Language Models (LLMs) have transformed the Natural Language Processing (NLP) landscape with their remarkable ability to understand and generate human-like text.
Explaining EDA synthesis errors with LLMs
Training new engineers in digital design is a challenge, particularly when it comes to teaching the complex electronic design automation (EDA) tooling used in this domain.
PMG : Personalized Multimodal Generation with Large Language Models
Such user preferences are then fed into a generator, such as a multimodal LLM or diffusion model, to produce personalized content.
Exploring Autonomous Agents through the Lens of Large Language Models: A Review
Large Language Models (LLMs) are transforming artificial intelligence, enabling autonomous agents to perform diverse tasks across various domains.
The Death of Feature Engineering? BERT with Linguistic Features on SQuAD 2.0
We conclude that the BERT base model can be improved by incorporating the linguistic features.
Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey
With the advancement of Artificial Intelligence (AI) and Large Language Models (LLMs), there is a profound transformation occurring in the realm of natural language processing tasks within the legal domain.
Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents
This approach aims to generate relation representations that are more aware of spatial context and unseen relations, in a manner similar to human perception.