Answer Selection
47 papers with code • 6 benchmarks • 10 datasets
Answer Selection is the task of identifying the correct answer to a question from a pool of candidate answers. This task can be formulated as a classification or a ranking problem.
Source: Learning Analogy-Preserving Sentence Embeddings for Answer Selection
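The ranking formulation above can be sketched with a toy scorer. This is a minimal illustrative sketch, not from any cited paper: `score_candidate` is a simple lexical-overlap stand-in for a learned relevance model, and `rank_candidates` orders the candidate pool by that score.

```python
def score_candidate(question: str, candidate: str) -> float:
    """Toy scorer: Jaccard word overlap between question and candidate."""
    q_words = set(question.lower().split())
    c_words = set(candidate.lower().split())
    if not q_words or not c_words:
        return 0.0
    return len(q_words & c_words) / len(q_words | c_words)

def rank_candidates(question: str, candidates: list[str]) -> list[str]:
    """Order the candidate pool from most to least relevant."""
    return sorted(candidates, key=lambda c: score_candidate(question, c),
                  reverse=True)

question = "What year did the Apollo 11 mission land on the moon"
candidates = [
    "The Eiffel Tower is in Paris",
    "Apollo 11 landed on the moon in 1969",
    "The moon orbits the Earth",
]
ranked = rank_candidates(question, candidates)
```

In practice the lexical scorer would be replaced by a trained model (e.g., a fine-tuned transformer) that assigns each question-candidate pair a relevance score; the ranking step stays the same.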
Most implemented papers
A Wrong Answer or a Wrong Question? An Intricate Relationship between Question Reformulation and Answer Selection in Conversational Question Answering
The dependency between adequate question formulation and correct answer selection is an intriguing but still underexplored area.
Utilizing Bidirectional Encoder Representations from Transformers for Answer Selection
We find that fine-tuning the BERT model for the answer selection task is very effective and observe a maximum improvement of 13.1% in the QA datasets and 18.7% in the CQA datasets compared to the previous state-of-the-art.
NUT-RC: Noisy User-generated Text-oriented Reading Comprehension
Most existing RC models are developed on formal datasets such as news articles and Wikipedia documents, which severely limits their performance when directly applied to the noisy, informal texts in social media.
ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks
In compositional question answering, systems must assemble several pieces of supporting evidence from the document to generate the final answer, which is more difficult than sentence-level or phrase-level QA.
[Re] Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings
In addition to making the codebase more modular and easy to navigate, we have made changes to incorporate different transformers in the question embedding module.
CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues
This paper addresses the problem of dialogue reasoning with contextualized commonsense inference.
Solution of DeBERTaV3 on CommonsenseQA
We report the performance of DeBERTaV3 on CommonsenseQA in this report.
Paragraph-based Transformer Pre-training for Multi-Sentence Inference
Our evaluation on three AS2 datasets and one fact verification dataset demonstrates that our pre-training technique outperforms traditional ones for transformers used as joint models for multi-candidate inference tasks, as well as when used as cross-encoders for sentence-pair formulations of these tasks.
Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling
Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI).
Leveraging Large Language Models for Multiple Choice Question Answering
A more natural prompting approach is to present the question and answer options to the LLM jointly and have it output the symbol (e.g., "A") associated with its chosen answer option.