Question Answering
2883 papers with code • 130 benchmarks • 360 datasets
Question Answering is the task of answering questions (typically reading-comprehension questions) while abstaining when a question cannot be answered from the provided context.
Question answering can be segmented into domain-specific tasks like community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics like EM and F1. Some recent top-performing models are T5 and XLNet.
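The EM and F1 metrics mentioned above are conventionally computed SQuAD-style: answers are normalized (lowercased, punctuation and articles stripped), EM checks for an exact string match after normalization, and F1 measures token-level overlap between the prediction and the gold answer. A minimal sketch of that evaluation, following the SQuAD conventions:

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 over the normalized answers."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In practice each question has several gold answers and the per-question score is the maximum over them; benchmark leaderboards then report the average EM and F1 across all questions.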
(Image credit: SQuAD)
Libraries
Use these libraries to find Question Answering models and implementations.
Subtasks
- Open-Ended Question Answering
- Open-Domain Question Answering
- Conversational Question Answering
- Answer Selection
- Knowledge Base Question Answering
- Community Question Answering
- Zero-Shot Video Question Answer
- Multiple Choice Question Answering (MCQA)
- Long Form Question Answering
- Science Question Answering
- Generative Question Answering
- Cross-Lingual Question Answering
- Mathematical Question Answering
- Temporal/Causal QA
- Logical Reasoning Question Answering
- Multilingual Machine Comprehension in English and Hindi
- True or False Question Answering
- Question Quality Assessment
Latest papers
From Matching to Generation: A Survey on Generative Information Retrieval
We will summarize the advancements in GR regarding model training, document identifier, incremental learning, downstream tasks adaptation, multi-modal GR and generative recommendation, as well as progress in reliable response generation in aspects of internal knowledge memorization, external knowledge augmentation, generating response with citations and personal information assistant.
Simulating Task-Oriented Dialogues with State Transition Graphs and Large Language Models
In our experiments, using graph-guided response simulations leads to significant improvements in intent classification, slot filling and response relevance compared to naive single-prompt simulated conversations.
Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering
To simulate real-world scenarios and evaluate the ability of LLMs to integrate internal and external knowledge, in this paper, we propose leveraging LLMs for QA under Incomplete Knowledge Graph (IKGQA), where the given KG doesn't include all the factual triples involved in each question.
Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models
Existing work investigating this reasoning ability of LLMs has focused only on a couple of inference rules (such as modus ponens and modus tollens) of propositional and first-order logic.
Bias patterns in the application of LLMs for clinical decision support: A comprehensive study
Large Language Models (LLMs) have emerged as powerful candidates to inform clinical decision-making processes.
Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers
In this paper, we use \textit{diagnostic classifiers} to measure the extent to which the visual prompt produced by the resampler encodes spatial information.
Listen Then See: Video Alignment with Speaker Attention
Our approach exhibits an improved ability to leverage the video modality by using the audio modality as a bridge with the language modality.
MahaSQuAD: Bridging Linguistic Divides in Marathi Question-Answering
Hence, to address this challenge, we also present a generic approach for translating SQuAD into any low-resource language.
ISQA: Informative Factuality Feedback for Scientific Summarization
We propose Iterative Factuality Refining on Informative Scientific Question-Answering (ISQA) feedback\footnote{Code is available at \url{https://github.com/lizekai-richard/isqa}}, a method following human learning theories that employs model-generated feedback consisting of both positive and negative information.
LaPA: Latent Prompt Assist Model For Medical Visual Question Answering
In this paper, we propose the Latent Prompt Assist model (LaPA) for medical visual question answering.