Answer Selection

47 papers with code • 6 benchmarks • 10 datasets

Answer Selection is the task of identifying the correct answer to a question from a pool of candidate answers. This task can be formulated as a classification or a ranking problem.

Source: Learning Analogy-Preserving Sentence Embeddings for Answer Selection
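In the ranking formulation described above, each candidate answer is scored against the question and the highest-scoring candidate is selected. A minimal sketch of this idea, using a toy bag-of-words cosine similarity in place of the neural sentence encoders used in the papers below (all names here are illustrative):

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; real systems use neural encoders.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_answer(question, candidates):
    # Ranking formulation: score every candidate against the question
    # and return the highest-scoring one.
    q = embed(question)
    return max(candidates, key=lambda c: cosine(q, embed(c)))

question = "what year did the apollo 11 mission land on the moon"
candidates = [
    "apollo 11 landed on the moon in 1969",
    "the eiffel tower is in paris",
    "pizza was invented in naples",
]
print(select_answer(question, candidates))
```

The classification formulation instead scores each (question, candidate) pair independently and thresholds the score; the ranking view used here only requires that the correct answer outscore the distractors.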

Latest papers with no code

When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards

no code yet • 1 Feb 2024

Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection.

Enhancing Answer Selection in Community Question Answering with Pre-trained and Large Language Models

no code yet • 29 Nov 2023

Moreover, we use an LLM to generate external knowledge from questions and correct answers, providing knowledge augmentation for the answer selection task, while optimizing the LLM's prompt along several dimensions.

Evaluating LLMs on Document-Based QA: Exact Answer Selection and Numerical Extraction using Cogtale dataset

no code yet • 14 Nov 2023

In this paper, we focus on this underexplored context and conduct an empirical analysis of LLMs (GPT-4 and GPT-3.5) on question types including single-choice, yes-no, multiple-choice, and number-extraction questions from documents in a zero-shot setting.

Improving Zero-shot Reader by Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering

no code yet • 26 Oct 2023

Large language models (LLMs) enable zero-shot approaches in open-domain question answering (ODQA), yet advancements in the reader have been limited compared to the retriever.

SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References

no code yet • 21 Sep 2023

Evaluation of QA systems is very challenging and expensive, with the most reliable approach being human annotation of answer correctness.

Intent-calibrated Self-training for Answer Selection in Open-domain Dialogues

no code yet • 13 Jul 2023

Specifically, it improves the F1 score by 2.06% and 1.00% on the two datasets, compared with the strongest baseline, using only 5% labeled data.

Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge

no code yet • 30 May 2023

The open-ended Visual Question Answering (VQA) task requires AI models to jointly reason over visual and natural language inputs using world knowledge.

Getting MoRE out of Mixture of Language Model Reasoning Experts

no code yet • 24 May 2023

Beyond generalizability, the interpretable design of MoRE improves selective question answering results compared to baselines without incorporating inter-expert agreement.

RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm

no code yet • 7 Jan 2023

We initialize the policy weights with the improved ABC algorithm.

Exploiting Hybrid Semantics of Relation Paths for Multi-hop Question Answering Over Knowledge Graphs

no code yet • COLING 2022

Answering natural language questions on knowledge graphs (KGQA) remains a great challenge in terms of understanding complex questions via multi-hop reasoning.