The goal of Paraphrase Identification is to determine whether two sentences have the same meaning.
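As a toy illustration of the task framing (binary classification over sentence pairs), the sketch below scores lexical overlap with Jaccard similarity and thresholds it. This is only a minimal baseline for intuition; none of the systems listed below work this way — they use neural sentence encoders.

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def is_paraphrase(a: str, b: str, threshold: float = 0.5) -> bool:
    """Binary paraphrase decision: overlap above an (assumed) threshold."""
    return jaccard(a, b) >= threshold
```

Such surface-overlap baselines fail on paraphrases with little shared vocabulary, which is precisely the gap the representation-learning methods below target.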
In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.
Ranked #1 on Semantic Textual Similarity on SentEval
In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks.
Ranked #1 on Paraphrase Identification on Quora Question Pairs
To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method: a knowledge distillation (KD) approach specially designed for Transformer-based models.
Ranked #1 on Linguistic Acceptability on CoLA Dev
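The core ingredient of knowledge distillation mentioned above is a soft-target loss: the student is trained to match the teacher's temperature-softened output distribution. A minimal stdlib-only sketch (the function names and the temperature value are illustrative assumptions, not the paper's exact formulation, which also distills intermediate Transformer layers):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def soft_target_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's soft predictions against the
    teacher's soft targets; minimized when the distributions match."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(p_t, p_s))
```

A higher temperature flattens both distributions, exposing the teacher's relative confidences over wrong classes ("dark knowledge") to the student.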
Neural language representation models such as BERT, pre-trained on large-scale corpora, can capture rich semantic patterns from plain text and can be fine-tuned to consistently improve performance on various NLP tasks.
Ranked #1 on Relation Extraction on FewRel
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text.
Ranked #1 on Question Answering on HotpotQA
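Span-oriented pre-training of the kind described above masks contiguous token spans rather than individual tokens, with span lengths drawn from a clipped geometric distribution. The sketch below illustrates that sampling step only (parameter values are assumptions for the example, and the span-boundary prediction objective is omitted):

```python
import random

def sample_masked_spans(num_tokens, mask_ratio=0.15, p=0.2,
                        max_span=10, seed=0):
    """Select contiguous spans to mask until ~mask_ratio of tokens are
    covered; span length ~ Geometric(p), clipped to max_span.
    Returns the sorted list of masked token indices."""
    rng = random.Random(seed)
    budget = max(1, int(num_tokens * mask_ratio))
    masked = set()
    while len(masked) < budget:
        # Draw a geometric span length (mean ~ 1/p), clipped.
        length = 1
        while rng.random() >= p and length < max_span:
            length += 1
        start = rng.randrange(num_tokens - length + 1)
        masked.update(range(start, start + length))
    return sorted(masked)
```

Masking whole spans forces the model to predict multi-token units (entities, phrases) from context at the span boundaries, rather than leaning on adjacent tokens inside the span.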
In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks.
Ranked #1 on Paraphrase Identification on 2017_test set
Sentence pair modeling is critical for many NLP tasks, such as paraphrase identification, semantic textual similarity, and natural language inference.
Most existing work on adversarial data generation focuses on English.