Open-domain question answering is the task of answering factual questions against a large, open-domain knowledge source such as Wikipedia, rather than a single provided context passage.
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method.
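To make the sparse retrieval step concrete, here is a minimal sketch of TF-IDF passage retrieval using scikit-learn. The passages and query are toy placeholders (a real system would index millions of Wikipedia passages), and BM25 works the same way with a different term-weighting scheme.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# Toy passage corpus; a real open-domain system would index
# millions of Wikipedia passages.
passages = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
    "BM25 is a ranking function used by search engines.",
]

# Build a sparse TF-IDF index over the passages.
vectorizer = TfidfVectorizer()
passage_vecs = vectorizer.fit_transform(passages)

def retrieve(query: str, k: int = 2):
    """Return the top-k passages by TF-IDF cosine similarity."""
    query_vec = vectorizer.transform([query])
    scores = linear_kernel(query_vec, passage_vecs).ravel()
    top = scores.argsort()[::-1][:k]
    return [(passages[i], float(scores[i])) for i in top]

print(retrieve("What is the capital of France?"))
```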
Large Transformer models routinely achieve state-of-the-art results on a number of tasks, but training these models can be prohibitively costly, especially on long sequences.
Ranked #2 on Question Answering on Quasar-T
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
Ranked #1 on Open-Domain Question Answering on SQuAD1.1
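Below is a hedged sketch of the two-stage retriever-reader pipeline this approach implies. It reuses the `retrieve` function from the TF-IDF example above; `read_span` is a hypothetical placeholder reader (in the actual DrQA system the reader is a trained RNN that predicts answer start/end positions).

```python
# Sketch of a retriever-reader pipeline in the spirit of DrQA.
# `retrieve` is the TF-IDF function defined above; `read_span`
# is a hypothetical stand-in for a trained span-extraction reader.

def read_span(question: str, passage: str) -> tuple[str, float]:
    # Placeholder reader: return the passage sentence with the
    # most question-word overlap. A real reader predicts a span
    # with a trained neural model.
    q_words = set(question.lower().split())
    best, best_score = "", 0.0
    for sent in passage.split(". "):
        overlap = len(q_words & set(sent.lower().split()))
        if overlap > best_score:
            best, best_score = sent, float(overlap)
    return best, best_score

def answer(question: str, k: int = 3) -> str:
    # Retrieve candidate passages, then keep the best-scoring span.
    candidates = retrieve(question, k)
    spans = [read_span(question, p) for p, _ in candidates]
    return max(spans, key=lambda s: s[1])[0]

print(answer("What is the capital of France?"))
```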
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
Ranked #1 on Open-Domain Question Answering on DuReader
Tasks: Chinese Named Entity Recognition, Chinese Reading Comprehension, Chinese Sentence Pair Classification, Chinese Sentiment Analysis, Linguistic Acceptability, Multi-Task Learning, Natural Language Inference, Open-Domain Question Answering, Semantic Textual Similarity, Sentiment Analysis
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.
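One way to see this world knowledge directly is to probe a pre-trained masked language model with a fill-in-the-blank query. This sketch assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint; the model answers from its pre-training corpus alone, with no retrieval or fine-tuning.

```python
from transformers import pipeline

# Probe the factual knowledge stored in a pre-trained masked LM.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for pred in unmasker("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```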
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query.
Ranked #4 on Question Answering on CNN / Daily Mail
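The context-query interaction at the heart of attention-based readers such as BiDAF can be sketched in a few lines of NumPy. This is a simplified illustration: random vectors stand in for encoder outputs, and a plain dot product replaces BiDAF's learned trilinear similarity function.

```python
import numpy as np

rng = np.random.default_rng(0)
T, J, d = 6, 4, 8                   # context length, query length, hidden size
context = rng.normal(size=(T, d))   # stand-ins for encoder outputs
query = rng.normal(size=(J, d))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Similarity matrix S[t, j]: how well context word t matches query
# word j (BiDAF uses a learned trilinear function; a dot product
# keeps the sketch simple).
S = context @ query.T                 # (T, J)

# Context-to-query attention: each context word attends over the query.
c2q = softmax(S, axis=1) @ query      # (T, d)

# Query-to-context attention: attend over context words via the
# maximum similarity each one achieves against the query.
b = softmax(S.max(axis=1))            # (T,)
q2c = b @ context                     # (d,)

print(c2q.shape, q2c.shape)
```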
Transformers are powerful sequence models, but require time and memory that grow quadratically with the sequence length.
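A small NumPy illustration of where the quadratic term comes from: the self-attention score matrix QKᵀ has shape (n, n), so doubling the sequence length quadruples its memory. The sequence lengths and head dimension below are arbitrary example values.

```python
import numpy as np

def attention_scores_bytes(n: int, d: int = 64) -> int:
    """Memory (bytes) of the n x n float32 attention score matrix."""
    q = np.zeros((n, d), dtype=np.float32)
    k = np.zeros((n, d), dtype=np.float32)
    scores = q @ k.T          # shape (n, n): the quadratic term
    return scores.nbytes

for n in (1_024, 2_048, 4_096):
    print(n, attention_scores_bytes(n) / 2**20, "MiB")  # 4, 16, 64 MiB
```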