Open-Domain Question Answering
195 papers with code • 15 benchmarks • 26 datasets
Open-domain question answering is the task of answering questions over large open-domain knowledge sources such as Wikipedia, rather than over a pre-selected passage.
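Many open-domain QA systems follow a retrieve-then-read pipeline: fetch candidate passages from the knowledge source, then extract or generate an answer from them. Below is a minimal sketch of that setup; the tiny in-memory corpus, TF-IDF retriever, and stub reader are illustrative stand-ins, not any specific benchmark or model.

```python
# Minimal retrieve-then-read sketch for open-domain QA (illustrative only).
# The corpus here is a placeholder standing in for a large source like Wikipedia.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is Earth's highest mountain above sea level.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank corpus passages by TF-IDF cosine similarity to the question."""
    vectorizer = TfidfVectorizer().fit(corpus + [question])
    passage_vecs = vectorizer.transform(corpus)
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, passage_vecs)[0]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [passage for _, passage in ranked[:k]]

def read(question: str, passages: list[str]) -> str:
    """Stub reader: a real system would run an extractive or generative model here."""
    return passages[0]  # placeholder: return the top-ranked passage

question = "What is the capital of France?"
print(read(question, retrieve(question)))
```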
Libraries
Use these libraries to find Open-Domain Question Answering models and implementations.
Latest papers with no code
Is Table Retrieval a Solved Problem? Join-Aware Multi-Table Retrieval
Retrieving the relevant tables that contain the information needed to accurately answer a given question is critical to open-domain question-answering (QA) systems over tables.
Towards Better Generalization in Open-Domain Question Answering by Mitigating Context Memorization
In addition, it is still unclear how well an OpenQA model can transfer to completely new knowledge domains.
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts
With our method, the original language models can cover contexts several times longer while keeping the computing requirements close to the baseline.
FIT-RAG: Black-Box RAG with Factual Information and Token Reduction
Simply concatenating all the retrieved documents feeds large numbers of unnecessary tokens to the LLM, which degrades the efficiency of black-box RAG.
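As an illustration of the inefficiency described above, here is a minimal sketch of naive black-box RAG prompt assembly; the documents are placeholders and a crude whitespace count stands in for the LLM's real tokenizer.

```python
# Naive black-box RAG prompt assembly (the inefficiency FIT-RAG targets).
# Documents and the token count below are illustrative placeholders.
retrieved_docs = [f"Document {i}: ... lengthy passage text ..." for i in range(50)]
question = "Who wrote the novel referenced in document 3?"

# Every retrieved document is concatenated into a single prompt for the black-box LLM.
prompt = "\n\n".join(retrieved_docs) + f"\n\nQuestion: {question}\nAnswer:"

# Rough whitespace token count; a real system would use the model's tokenizer.
print(f"Prompt length: {len(prompt.split())} whitespace tokens "
      f"across {len(retrieved_docs)} concatenated documents.")
```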
Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering
Finally, based on these observations, we propose a method that mitigates overfitting to a specific context quality by introducing a bias into the cross-attention distribution, which we show improves the performance of FiD models across different context qualities.
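Without the paper's specifics, the following toy numpy snippet illustrates the general idea of biasing a cross-attention distribution over retrieved passages; the scores and bias values are invented for illustration and do not reproduce the paper's method.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    return np.exp(x) / np.exp(x).sum()

# Toy cross-attention scores of one decoder query over four retrieved passages.
scores = np.array([2.0, 1.5, 0.3, -0.5])

# Hypothetical per-passage bias, e.g. down-weighting passages judged lower quality.
bias = np.array([0.0, 0.0, -1.0, -1.0])

print("original attention:", softmax(scores))
print("biased attention:  ", softmax(scores + bias))
```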
Harnessing Multi-Role Capabilities of Large Language Models for Open-Domain Question Answering
Open-domain question answering (ODQA) has emerged as a pivotal research area in information systems.
To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering
Medical open-domain question answering demands substantial access to specialized knowledge.
Answerability in Retrieval-Augmented Open-Domain Question Answering
To address this limitation, we discovered an efficient approach for training models to recognize such excerpts.
Automatic Question-Answer Generation for Long-Tail Knowledge
Pretrained Large Language Models (LLMs) have gained significant attention for addressing open-domain Question Answering (QA).
Reasoning in Conversation: Solving Subjective Tasks through Dialogue Simulation for Large Language Models
Based on the characteristics of the tasks and the strong dialogue-generation capabilities of LLMs, we propose RiC (Reasoning in Conversation), a method that focuses on solving subjective tasks through dialogue simulation.
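A minimal sketch of what dialogue-simulation prompting can look like; `call_llm` is a placeholder for any chat-completion call, and the prompts are illustrative rather than RiC's actual templates.

```python
# Sketch of dialogue-simulation prompting for a subjective question.
# `call_llm` is a placeholder for an actual LLM API call; prompts are illustrative.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM call here")

def answer_via_dialogue(question: str, turns: int = 2) -> str:
    transcript = []
    for _ in range(turns):
        # One simulated speaker argues a position on the subjective question.
        supporter = call_llm(f"Argue in favour of one view on: {question}\n"
                             "Dialogue so far:\n" + "\n".join(transcript))
        transcript.append(f"Speaker A: {supporter}")
        # A second simulated speaker responds critically.
        critic = call_llm("Respond critically to the last turn.\n"
                          "Dialogue so far:\n" + "\n".join(transcript))
        transcript.append(f"Speaker B: {critic}")
    # The final answer is conditioned on the simulated conversation.
    return call_llm("Given this dialogue, give a balanced final answer to: "
                    f"{question}\n" + "\n".join(transcript))
```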