Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences. For example, resolving the pronoun in "The trophy didn't fit in the suitcase because it was too big" requires knowing that a container must be larger than its contents.
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
Ranked #1 on Question Answering on CoQA
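Once pretrained, such a bidirectional encoder yields a contextual vector for every input token. A minimal sketch of querying one, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (both assumptions here, not part of the original release):

```python
# Extract bidirectional contextual token representations from a
# pretrained BERT checkpoint (via Hugging Face `transformers`).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The trophy didn't fit in the suitcase.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per token, each conditioned on both left and right context.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```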
Commonsense reasoning is a long-standing challenge for deep learning.
Ranked #2 on Common Sense Reasoning on Winograd Schema Challenge
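A common language-model recipe for Winograd-style problems is to substitute each candidate referent for the ambiguous pronoun and keep whichever variant the model scores as more likely. A minimal sketch of that idea, assuming GPT-2 via Hugging Face transformers as an illustrative scorer (not necessarily the model or exact procedure used in the paper):

```python
# Resolve a Winograd pronoun by comparing language-model likelihoods of
# the sentence with each candidate substituted in.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_nll(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids returns the mean per-token negative
        # log-likelihood of the sequence.
        return model(ids, labels=ids).loss.item()

template = "The trophy didn't fit in the suitcase because the {} was too big."
candidates = ["trophy", "suitcase"]
# Lower NLL means the model finds the substitution more plausible.
print(min(candidates, key=lambda c: sentence_nll(template.format(c))))
```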
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
Ranked #1 on Sentiment Analysis on SST-2 Binary classification
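In the text-to-text framing, every downstream task, SST-2 sentiment classification included, becomes mapping a task-prefixed input string to an output string. A minimal sketch, assuming the public t5-small checkpoint and Hugging Face transformers; the "sst2 sentence:" prefix follows the released T5 task format, but treat the specifics as illustrative:

```python
# Text-to-text transfer: one pretrained seq2seq model, tasks selected by
# a textual prefix, labels emitted as plain strings.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("sst2 sentence: this movie was wonderful",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "positive"
```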
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
Ranked #1 on Language Modelling on enwik8 (using extra training data)
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions, something which current NLP systems still largely struggle to do.
Ranked #1 on Sentence Completion on HellaSwag
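Performing a task "from only a few examples or from simple instructions" is typically operationalized as few-shot prompting: demonstrations are placed directly in the context window and the model completes the pattern, with no gradient updates. A minimal sketch of prompt construction (the task and examples are illustrative assumptions):

```python
# Build a few-shot prompt: an instruction, two in-context demonstrations,
# and an unanswered query for the model to complete.
demonstrations = [
    ("The movie was a waste of time.", "negative"),
    ("An absolute masterpiece of filmmaking.", "positive"),
]
query = "I would happily watch it again."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

# `prompt` is fed to a large language model as ordinary input text; the
# model is expected to continue with "positive" from the pattern alone.
print(prompt)
```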
In this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation.
Ranked #6 on Click-Through Rate Prediction on Bing News
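One simplified way to read "incorporates knowledge graph representation" is as a multi-channel encoder: each word in a news title is aligned with the embedding of its linked knowledge-graph entity, and the two sequences are stacked as input channels of a CNN over the title. A minimal PyTorch sketch under that assumption (dimensions and layers are illustrative, not the paper's exact architecture):

```python
# Knowledge-aware title encoding: word embeddings and aligned entity
# embeddings as two channels of a CNN, max-pooled over time.
import torch
import torch.nn as nn

seq_len, dim = 10, 50
word_emb   = torch.randn(1, seq_len, dim)  # word embeddings of one title
entity_emb = torch.randn(1, seq_len, dim)  # linked-entity embeddings (e.g. from TransE)

x = torch.stack([word_emb, entity_emb], dim=1)        # (1, 2, seq_len, dim)
conv = nn.Conv2d(in_channels=2, out_channels=100, kernel_size=(3, dim))
title_feature = conv(x).squeeze(3).max(dim=2).values  # (1, 100)
print(title_feature.shape)  # knowledge-aware title representation
```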
Recent models for unsupervised representation learning of text have employed a number of techniques to improve contextual word representations but have put little focus on discourse-level representations.
Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species.
Ranked #2 on Hand Gesture Recognition on the Jester test set
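A common formalization of temporal relational reasoning is to score ordered pairs of frame features with a small network g, sum the pair scores, and classify the sum with a second network h. A minimal PyTorch sketch under that reading (feature sizes and layer widths are illustrative):

```python
# Two-frame temporal relation: g scores each ordered pair of per-frame
# features, the scores are summed, and h maps the sum to class logits.
import itertools
import torch
import torch.nn as nn

num_frames, feat_dim, num_classes = 8, 256, 27  # illustrative sizes
frames = torch.randn(num_frames, feat_dim)      # per-frame CNN features

g = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU())
h = nn.Linear(256, num_classes)

pair_sum = sum(g(torch.cat([frames[i], frames[j]]))
               for i, j in itertools.combinations(range(num_frames), 2))
logits = h(pair_sum)
print(logits.shape)  # (num_classes,)
```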
Commonsense reasoning aims to empower machines with the human ability to make presumptions about ordinary situations in our daily life.
Ranked #3 on Common Sense Reasoning on CommonsenseQA (using extra training data)