Open-domain question answering is the task of answering questions without a pre-specified context passage, drawing on a large open knowledge source such as Wikipedia.
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
Ranked #1 on Open-Domain Question Answering on SQuAD1.1
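The retrieve-then-read approach described above can be sketched as a two-stage pipeline. This is a hypothetical outline, not the paper's implementation: `retrieve_articles` and `extract_span` stand in for a document retriever and a reading-comprehension model.

```python
# Hypothetical sketch of a two-stage open-domain QA pipeline:
# retrieve candidate articles, then extract an answer span from each.
def answer(question, retrieve_articles, extract_span, k=5):
    candidates = retrieve_articles(question, k)          # top-k articles
    spans = [extract_span(question, doc) for doc in candidates]
    # keep the span with the highest reader confidence
    return max(spans, key=lambda s: s[1])[0]
```

Any retriever and reader with these signatures can be plugged in; the final answer is the span the reader scores highest across all retrieved articles.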
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
Ranked #1 on Chinese Sentiment Analysis on ChnSentiCorp Dev
Tasks: Chinese Named Entity Recognition, Chinese Reading Comprehension, Chinese Sentence Pair Classification, Chinese Sentiment Analysis, Linguistic Acceptability, Multi-Task Learning, Natural Language Inference, Open-Domain Question Answering, Semantic Textual Similarity, Sentiment Analysis
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query.
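Extractive MC models of this kind typically finish by scoring every possible answer span in the context. A minimal sketch of that last step, with toy scores rather than real model outputs:

```python
# Sketch: pick the best answer span (start, end) given per-token start and
# end scores, as extractive readers typically do. Scores are toy values;
# max_len bounds the span length so end cannot precede start.
def best_span(start_scores, end_scores, max_len=15):
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best_score:
                best_score = s + end_scores[j]
                best = (i, j)
    return best
```

In a real model the two score vectors come from the final layers of the reader; the exhaustive search here is fine because spans are constrained to a short maximum length.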
Ranked #4 on Question Answering on MS MARCO
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method.
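The sparse retrieval baseline mentioned here can be illustrated with a pure-Python BM25 scorer. The corpus and the constants `k1` and `b` are illustrative assumptions, not values from any of the listed papers.

```python
# Minimal pure-Python BM25 sketch of sparse passage retrieval.
import math
from collections import Counter

passages = [
    "the eiffel tower is in paris france",
    "python is a programming language created by guido van rossum",
    "the great wall of china is very long",
]

docs = [p.split() for p in passages]
N = len(docs)
avgdl = sum(len(d) for d in docs) / N
df = Counter(t for d in docs for t in set(d))  # document frequency per term

def bm25(query, doc, k1=1.5, b=0.75):
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        denom = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * tf[term] * (k1 + 1) / denom
    return score

def retrieve(query, k=1):
    ranked = sorted(range(N), key=lambda i: bm25(query, docs[i]), reverse=True)
    return [passages[i] for i in ranked[:k]]
```

Dense retrievers replace this lexical-overlap scoring with learned embeddings, which is exactly the contrast the abstract above draws.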
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text.
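Span-level pre-training replaces BERT's single-token masking with masking of contiguous spans. The sketch below is only in the spirit of that idea: span lengths are drawn uniformly here, not from the geometric distribution the actual method uses, and the tokenization is a plain whitespace split.

```python
# Illustrative sketch of contiguous span masking (not the exact SpanBERT
# procedure): repeatedly mask short contiguous spans until roughly 15%
# of tokens are masked.
import random

def mask_spans(tokens, mask_ratio=0.15, max_span=3, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    tokens = list(tokens)
    budget = max(1, int(len(tokens) * mask_ratio))
    masked = 0
    while masked < budget:
        span_len = rng.randint(1, max_span)          # uniform, for brevity
        start = rng.randrange(0, len(tokens))
        for i in range(start, min(start + span_len, len(tokens))):
            if tokens[i] != mask_token:
                tokens[i] = mask_token
                masked += 1
    return tokens
```

The training objective then asks the model to predict the original tokens of each masked span, encouraging span-level representations.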
Ranked #1 on Question Answering on HotpotQA
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering.