Reading Comprehension
568 papers with code • 7 benchmarks • 95 datasets
Most current question answering datasets frame the task as reading comprehension where the question is about a paragraph or document and the answer often is a span in the document.
Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
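The span-prediction category above can be made concrete with a minimal sketch: a model outputs start and end token indices, and the answer is the corresponding slice of the source document. The function name and the example are illustrative assumptions, not taken from any specific library.

```python
# Minimal sketch of span-prediction reading comprehension: the answer is
# a contiguous span of the document, selected by predicted start/end indices.

def extract_answer_span(context_tokens, start_idx, end_idx):
    """Return the answer span given predicted start/end token indices."""
    if not (0 <= start_idx <= end_idx < len(context_tokens)):
        raise ValueError("predicted indices fall outside the document")
    return " ".join(context_tokens[start_idx:end_idx + 1])

context = "The Eiffel Tower was completed in 1889 in Paris".split()
# Suppose a QA model predicts the single-token span covering "1889":
print(extract_answer_span(context, 6, 6))  # -> 1889
```

In real extractive QA systems (e.g. models trained on SQuAD), the start/end indices come from two classification heads over the token sequence; the extraction step itself is this simple slice.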
Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets
Libraries
Use these libraries to find Reading Comprehension models and implementations
Subtasks
- Machine Reading Comprehension
- Intent Recognition
- Implicit Relations
- LAMBADA
- Question Selection
- Multi-Hop Reading Comprehension
- Implicatures
- Logical Reasoning Reading Comprehension
- English Proverbs
- Fantasy Reasoning
- Figure Of Speech Detection
- Formal Fallacies Syllogisms Negation
- GRE Reading Comprehension
- Hyperbaton
- Movie Dialog Same Or Different
- Nonsense Words Grammar
- Phrase Relatedness
- RACE-h
- RACE-m
Latest papers with no code
PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering
Document Question Answering (QA) presents a challenge in understanding visually-rich documents (VRD), particularly those dominated by lengthy textual content like research journal articles.
emrQA-msquad: A Medical Dataset Structured with the SQuAD V2.0 Framework, Enriched with emrQA Medical Information
Machine Reading Comprehension (MRC) holds a pivotal role in shaping Medical Question Answering Systems (QAS) and transforming the landscape of accessing and applying medical information.
Question Difficulty Ranking for Multiple-Choice Reading Comprehension
Additionally, zero-shot comparative assessment is more effective at question difficulty ranking than absolute assessment and even task-transfer approaches, achieving a Spearman's correlation of 40.4%.
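The reported 40.4% corresponds to a Spearman's rank correlation of 0.404 between the model's difficulty ranking and the gold ranking. A small self-contained sketch of that metric (assuming no tied scores, for simplicity):

```python
# Spearman's rank correlation between two score lists, e.g. a model's
# question-difficulty scores vs. gold difficulty scores. No-ties version.

def spearman_rho(xs, ys):
    """Spearman's rho for two equal-length score lists without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

gold = [1.0, 2.0, 3.0, 4.0]   # gold difficulty scores (illustrative)
pred = [1.2, 1.9, 4.1, 3.5]   # model scores with the last two swapped
print(round(spearman_rho(gold, pred), 2))  # -> 0.8
```

In practice one would use `scipy.stats.spearmanr`, which also handles ties; the closed form above matches it when all scores are distinct.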
Fewer Truncations Improve Language Modeling
In large language model training, input documents are typically concatenated together and then split into sequences of equal length to avoid padding tokens.
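The packing step this paper critiques can be sketched in a few lines: token streams from many documents are concatenated (with a separator) and cut into fixed-length training sequences, so a document can end up truncated across a chunk boundary. The function name and EOS token are illustrative assumptions.

```python
# Sketch of the standard LLM pre-training packing step: concatenate
# documents, then split into equal-length sequences. Note how a document
# can be split across chunk boundaries (the truncation the paper targets).

def pack_into_sequences(documents, seq_len, eos_token="<eos>"):
    """Concatenate token lists with an EOS separator, then split into
    equal-length chunks; the final partial chunk is dropped (no padding)."""
    stream = []
    for doc in documents:
        stream.extend(doc)
        stream.append(eos_token)
    n_full = len(stream) // seq_len
    return [stream[i * seq_len:(i + 1) * seq_len] for i in range(n_full)]

docs = [["a", "b", "c"], ["d", "e"], ["f", "g", "h", "i"]]
for chunk in pack_into_sequences(docs, seq_len=4):
    print(chunk)
# The third document ("f g h i") is split across the last two chunks.
```

Fewer-truncation approaches instead pack documents into sequences more carefully (closer to a bin-packing problem) so that each document stays intact whenever it fits.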
Automatic Generation and Evaluation of Reading Comprehension Test Items with Large Language Models
We then used this protocol and the dataset to evaluate the quality of items generated by Llama 2 and GPT-4.
CausalBench: A Comprehensive Benchmark for Causal Learning Capability of Large Language Models
To address these challenges, this paper proposes a comprehensive benchmark, namely CausalBench, to evaluate the causality understanding capabilities of LLMs.
LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements
In particular, while some models prove virtually unaffected by knowledge conflicts in affirmative and negative contexts, when faced with more semantically involved modal and conditional environments, they often fail to separate the text from their internal knowledge.
XL$^2$Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies
However, prior benchmarks create datasets that ostensibly cater to long-text comprehension by expanding the input of traditional tasks, which falls short of exhibiting the unique characteristics of long-text understanding, including long-dependency tasks and text lengths compatible with modern LLMs' context window sizes.
The Hallucinations Leaderboard -- An Open Effort to Measure Hallucinations in Large Language Models
Large Language Models (LLMs) have transformed the Natural Language Processing (NLP) landscape with their remarkable ability to understand and generate human-like text.
Explaining EDA synthesis errors with LLMs
Training new engineers in digital design is a challenge, particularly when it comes to teaching the complex electronic design automation (EDA) tooling used in this domain.