TriviaQA

44 papers with code • 1 benchmark • 1 dataset

TriviaQA is a large-scale, distantly supervised reading comprehension dataset containing over 650K question-answer-evidence triples: questions authored by trivia enthusiasts paired with independently gathered evidence documents from the web and Wikipedia. It is widely used to evaluate both reading comprehension and open-domain question answering systems.

Most implemented papers

Longformer: The Long-Document Transformer

allenai/longformer 10 Apr 2020

To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.
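A minimal NumPy sketch of the sliding-window (local) self-attention idea behind that linear scaling; the function name and window size are illustrative and this is not the allenai/longformer API.

# Sliding-window self-attention: each token attends only to a window of w
# neighbours on either side, so cost grows as O(n * w) instead of O(n^2).
import numpy as np

def sliding_window_attention(q, k, v, w=2):
    """q, k, v: (seq_len, dim) arrays; w: one-sided window size."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)   # local window around token i
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)     # attention logits within the window
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

x = np.random.randn(8, 16)
print(sliding_window_attention(x, x, x, w=2).shape)  # (8, 16)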

Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering

huggingface/transformers 10 Nov 2019

We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.
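A toy sketch of that passage-graph construction; the keyword-based "entity" function is an invented stand-in for entity linking against an external knowledge base.

# Vertices are passages; an edge links two passages that come from the same
# article or share an entity mention (here approximated by capitalised tokens).
from itertools import combinations

passages = [
    {"id": 0, "article": "Paris", "text": "Paris is the capital of France."},
    {"id": 1, "article": "Paris", "text": "The Eiffel Tower is in Paris."},
    {"id": 2, "article": "France", "text": "France borders Spain and Italy."},
]

def entities(p):
    # Hypothetical stand-in for an entity linker / knowledge-base lookup.
    return {tok.strip(".") for tok in p["text"].split() if tok[0].isupper()}

edges = set()
for a, b in combinations(passages, 2):
    if a["article"] == b["article"] or entities(a) & entities(b):
        edges.add((a["id"], b["id"]))

print(sorted(edges))  # e.g. [(0, 1), (0, 2)]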

Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering

princeton-nlp/DensePhrases EACL 2021

Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge.

Relevance-guided Supervision for OpenQA with ColBERT

stanford-futuredata/ColBERT 1 Jul 2020

In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages.
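A small NumPy sketch contrasting such a coarse-grained single-vector score with ColBERT-style late interaction (MaxSim); the embeddings are random placeholders rather than the output of a trained encoder.

import numpy as np

rng = np.random.default_rng(0)
q_tokens = rng.standard_normal((5, 128))    # 5 query-token embeddings
p_tokens = rng.standard_normal((40, 128))   # 40 passage-token embeddings

# Coarse-grained: one pooled vector per side, scored by a dot product.
coarse_score = q_tokens.mean(axis=0) @ p_tokens.mean(axis=0)

# Late interaction (MaxSim): each query token finds its best-matching passage
# token, and the per-token maxima are summed.
sim = q_tokens @ p_tokens.T                 # (5, 40) token-level similarities
maxsim_score = sim.max(axis=1).sum()

print(coarse_score, maxsim_score)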

TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

mandarjoshi90/triviaqa ACL 2017

We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples.
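One way to load the data for experiments, assuming the Hugging Face datasets hub entry named "trivia_qa" and its "rc" (reading comprehension with evidence) configuration; the official release is also available via mandarjoshi90/triviaqa.

from datasets import load_dataset

trivia = load_dataset("trivia_qa", "rc", split="train")
example = trivia[0]
print(example["question"])          # the trivia question
print(example["answer"]["value"])   # the canonical answer string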

End-to-End Training of Neural Retrievers for Open-Domain Question Answering

NVIDIA/Megatron-LM ACL 2021

We also explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.
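A rough PyTorch sketch of one common recipe for such end-to-end training, marginalising the answer likelihood over retrieved passages so gradients flow into both retriever and reader; this illustrates the general idea only and is not the Megatron-LM implementation.

import torch
import torch.nn.functional as F

k = 4  # number of retrieved passages
retriever_scores = torch.randn(k, requires_grad=True)        # score per retrieved passage
reader_answer_logprob = torch.randn(k, requires_grad=True)   # log p(answer | question, passage_i)

log_p_passage = F.log_softmax(retriever_scores, dim=0)        # log p(passage_i | question)
marginal = torch.logsumexp(log_p_passage + reader_answer_logprob, dim=0)
loss = -marginal                                              # maximise marginal answer likelihood
loss.backward()
print(retriever_scores.grad)                                  # both components receive gradient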

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma NA 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Simple and Effective Multi-Paragraph Reading Comprehension

allenai/document-qa ACL 2018

We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input.
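A simple sketch of that document-level setting: split the document into paragraphs, rank them against the question with TF-IDF, and feed only the top-ranked paragraphs to a paragraph-level reader (the paper's shared-normalisation objective over paragraphs is not shown).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

document = ("First paragraph about the Nile.\n\n"
            "Second paragraph about the Amazon river length.\n\n"
            "Third paragraph about the Danube.")
question = "How long is the Amazon river?"

paragraphs = [p for p in document.split("\n\n") if p.strip()]
tfidf = TfidfVectorizer().fit(paragraphs + [question])
scores = cosine_similarity(tfidf.transform([question]), tfidf.transform(paragraphs))[0]
top_paragraphs = [p for _, p in sorted(zip(scores, paragraphs), reverse=True)][:2]
print(top_paragraphs)  # candidates to pass to the paragraph-level reader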

Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering

shuohangwang/mprc ICLR 2018

We propose two methods, namely, strength-based re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer.
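A toy illustration of strength-based re-ranking, where a candidate answer's strength is its score summed over the passages that support it, promoting answers backed by multiple passages; coverage-based re-ranking is not shown and the numbers are made up.

from collections import defaultdict

# (candidate answer, per-passage reader score) pairs from several retrieved passages
predictions = [
    ("1912", 0.41), ("1912", 0.38), ("April 1912", 0.30), ("1911", 0.22),
]

strength = defaultdict(float)
for answer, score in predictions:
    strength[answer] += score

best = max(strength, key=strength.get)
print(best, strength[best])  # "1912" wins because two passages support it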

A Question-Focused Multi-Factor Attention Network for Question Answering

nusnlp/amanda 25 Jan 2018

Neural network models recently proposed for question answering (QA) primarily focus on capturing the passage-question relation.