Paraphrase Identification

72 papers with code • 10 benchmarks • 17 datasets

The goal of Paraphrase Identification is to determine whether two sentences express the same meaning.

Source: Adversarial Examples with Difficult Common Words for Paraphrase Identification

Image source: On Paraphrase Identification Corpora
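In practice, the task is usually framed as binary sentence-pair classification. Below is a minimal inference sketch using the Hugging Face Transformers API; the checkpoint name is a placeholder for any model fine-tuned on a paraphrase corpus such as MRPC or QQP.

```python
# Minimal sketch: scoring a sentence pair for paraphrase identification.
# The checkpoint name is a placeholder, not a specific released model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "your-org/bert-finetuned-mrpc"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

s1 = "He said the food was tasty."
s2 = "He mentioned that the meal was delicious."

# Encode the two sentences as a single pair: [CLS] s1 [SEP] s2 [SEP]
inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()
print(f"P(paraphrase) = {probs[1]:.3f}")  # assumes label 1 = "is a paraphrase"
```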

Most implemented papers

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

google-research/bert NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
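For paraphrase identification, BERT is typically fine-tuned on sentence pairs packed into a single sequence, with segment embeddings distinguishing the two sentences and a classifier head on the [CLS] representation. A simplified fine-tuning sketch (hyperparameters, data loading, and the optimizer loop are omitted; the labels are illustrative):

```python
# Sketch of BERT fine-tuning for paraphrase identification.
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Two sentence pairs packed as [CLS] a [SEP] b [SEP]; token_type_ids mark the segments.
batch = tokenizer(
    ["The cat sat on the mat.", "A man is playing a guitar."],
    ["A cat was sitting on the mat.", "Someone plays the guitar."],
    padding=True, truncation=True, return_tensors="pt",
)
labels = torch.tensor([1, 1])  # 1 = paraphrase, 0 = not a paraphrase

outputs = model(**batch, labels=labels)  # cross-entropy loss on the [CLS] head
outputs.loss.backward()                  # one illustrative gradient step
```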

XLNet: Generalized Autoregressive Pretraining for Language Understanding

zihangdai/xlnet NeurIPS 2019

With the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

FNet: Mixing Tokens with Fourier Transforms

google-research/google-research NAACL 2022

At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs).
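The core idea is to replace the self-attention sublayer with an unparameterized 2D Fourier transform over the sequence and hidden dimensions, keeping only the real part. A rough PyTorch sketch of that mixing step (an assumption for illustration; the paper's reference code is in JAX/Flax):

```python
# Sketch of FNet's token-mixing sublayer: attention is replaced by a
# parameter-free Fourier transform, and only the real part is kept.
import torch
import torch.nn as nn

class FourierMixing(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden). FFT over the hidden dim, then the sequence dim.
        return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real

x = torch.randn(2, 128, 768)
mixed = FourierMixing()(x)  # same shape as x, no learned parameters
print(mixed.shape)          # torch.Size([2, 128, 768])
```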

Bilateral Multi-Perspective Matching for Natural Language Sentences

google-research-datasets/paws 13 Feb 2017

Natural language sentence matching is a fundamental technology for a variety of tasks.

data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language

pytorch/fairseq Preprint 2022

While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind.

ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs

yinwenpeng/Answer_Selection TACL 2016

We propose three attention schemes that integrate mutual influence between sentences into the CNN; thus, the representation of each sentence takes its counterpart into consideration.
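One of the proposed schemes builds an attention matrix whose entry (i, j) scores how well unit i of one sentence matches unit j of the other, using 1 / (1 + Euclidean distance). A small NumPy sketch of that matrix, written independently of the authors' code:

```python
# Sketch of an ABCNN-style attention matrix between two sentences.
import numpy as np

def attention_matrix(s0: np.ndarray, s1: np.ndarray) -> np.ndarray:
    """s0: (d, len0), s1: (d, len1) column-wise word feature maps."""
    diff = s0[:, :, None] - s1[:, None, :]   # (d, len0, len1) pairwise differences
    dist = np.sqrt((diff ** 2).sum(axis=0))  # (len0, len1) Euclidean distances
    return 1.0 / (1.0 + dist)                # higher value = better match

A = attention_matrix(np.random.randn(50, 7), np.random.randn(50, 9))
print(A.shape)  # (7, 9); A[i, j] relates unit i of sentence 0 to unit j of sentence 1
```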

Multi-Task Deep Neural Networks for Natural Language Understanding

namisan/mt-dnn ACL 2019

In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks.
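The architecture shares the lower Transformer layers across tasks and attaches small task-specific output heads, with mini-batches from different tasks interleaved during training. A simplified PyTorch illustration of that layout (an assumption for clarity, not the released mt-dnn code):

```python
# Sketch of a shared encoder with task-specific classification heads.
import torch.nn as nn
from transformers import BertModel

class MultiTaskModel(nn.Module):
    def __init__(self, task_num_labels: dict):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")  # shared layers
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, task: str, **inputs):
        pooled = self.encoder(**inputs).pooler_output  # [CLS] representation
        return self.heads[task](pooled)                # task-specific logits

model = MultiTaskModel({"mrpc": 2, "qqp": 2, "mnli": 3})
```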

TinyBERT: Distilling BERT for Natural Language Understanding

huawei-noah/Pretrained-Language-Model Findings of EMNLP 2020

To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models.
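The distillation objective combines MSE terms on hidden states (through a learned projection when the student is narrower than the teacher) and on attention matrices with a soft cross-entropy on the output logits. A condensed sketch of those loss terms, not the released training code:

```python
# Sketch of Transformer-distillation loss terms in the spirit of TinyBERT.
import torch.nn.functional as F

def distillation_loss(student_hidden, teacher_hidden, proj,
                      student_attn, teacher_attn,
                      student_logits, teacher_logits, temperature=1.0):
    # proj: e.g. torch.nn.Linear(student_dim, teacher_dim), learned with the student.
    hidden_loss = F.mse_loss(proj(student_hidden), teacher_hidden)
    attn_loss = F.mse_loss(student_attn, teacher_attn)
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    pred_loss = -(soft_targets *
                  F.log_softmax(student_logits / temperature, dim=-1)).sum(-1).mean()
    return hidden_loss + attn_loss + pred_loss
```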

SpanBERT: Improving Pre-training by Representing and Predicting Spans

facebookresearch/SpanBERT TACL 2020

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text.
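Instead of masking individual tokens, SpanBERT masks contiguous spans whose lengths are drawn from a geometric distribution, and adds a span-boundary objective that predicts the masked content from the span's boundary tokens. A simplified sketch of the span-masking step (word-level masking and the boundary objective are omitted):

```python
# Sketch of SpanBERT-style span masking: contiguous spans are masked until
# roughly 15% of the tokens are covered.
import numpy as np

def mask_spans(num_tokens: int, mask_ratio: float = 0.15, p: float = 0.2, max_len: int = 10):
    rng = np.random.default_rng()
    masked = set()
    budget = int(num_tokens * mask_ratio)
    while len(masked) < budget:
        length = min(rng.geometric(p), max_len)            # span length ~ Geometric(p), clipped
        start = rng.integers(0, max(1, num_tokens - length))
        masked.update(range(start, start + length))        # mask a contiguous span
    return sorted(masked)

print(mask_spans(64))  # indices of tokens replaced by [MASK] in one sequence
```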