Machine Reading Comprehension
197 papers with code • 4 benchmarks • 41 datasets
Machine Reading Comprehension is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it.
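The extractive formulation of this task can be sketched with a toy baseline. Everything below (the function, the sample passage, the question) is illustrative and not drawn from any dataset or model listed on this page: real MRC systems learn neural span scorers, but a simple lexical-overlap sentence selector shows the passage-question-answer loop.

```python
# Toy sentence-selection baseline for machine reading comprehension:
# given a passage and a question, return the passage sentence with
# the greatest word overlap with the question. Hypothetical sketch,
# not the method of any paper below.

def best_sentence(passage: str, question: str) -> str:
    # Normalize question words: lowercase, strip punctuation.
    q_words = {w.strip(".,?!").lower() for w in question.split()}
    # Naive sentence split on periods.
    sentences = [s.strip() + "." for s in passage.split(".") if s.strip()]
    # Score each sentence by how many of its words appear in the question.
    return max(
        sentences,
        key=lambda s: sum(w.strip(".,").lower() in q_words for w in s.split()),
    )

passage = (
    "SQuAD is a span-extraction benchmark for reading comprehension. "
    "It was released by Stanford in 2016."
)
print(best_sentence(passage, "When was SQuAD released?"))
# → "It was released by Stanford in 2016."
```

Neural MRC models refine this idea by scoring every candidate answer span with learned representations rather than surface word overlap.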
Latest papers
Interpreting Themes from Educational Stories
Reading comprehension continues to be a crucial research focus in the NLP community.
ArabicaQA: A Comprehensive Dataset for Arabic Question Answering
ArabicaQA, AraDPR, and the benchmarking of LLMs in Arabic question answering offer significant advances in Arabic NLP.
ChroniclingAmericaQA: A Large-scale Question Answering Dataset based on Historical American Newspaper Pages
To enable realistic testing of QA models, our dataset can be used in three ways: answering questions from the raw, noisy content; from a cleaner, corrected version of the content; or from scanned images of the newspaper pages.
WangchanLion and WangchanX MRC Eval
Our model is based on SEA-LION and a collection of instruction-following datasets.
VlogQA: Task, Dataset, and Baseline Models for Vietnamese Spoken-Based Machine Reading Comprehension
This paper presents the development of a Vietnamese spoken-language corpus for machine reading comprehension (MRC) and discusses the challenges and opportunities of using real-world data for MRC tasks.
Towards Robust Text Retrieval with Progressive Learning
However, existing embedding models for text retrieval usually have three non-negligible limitations.
Mirror: A Universal Framework for Various Information Extraction Tasks
Sharing knowledge between information extraction tasks has always been a challenge due to the diverse data formats and task variations.
MPrompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension
Large language models have achieved superior performance on a variety of natural language tasks.
Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers
Using GPT-4 as the editor, we find that it can successfully edit shortcut triggers in samples to fool LLMs.
Explaining Interactions Between Text Spans
Reasoning over spans of tokens from different parts of the input is essential for natural language understanding (NLU) tasks such as fact-checking (FC), machine reading comprehension (MRC) or natural language inference (NLI).