Sentence-Pair Classification

17 papers with code • 0 benchmarks • 1 datasets

This task has no description! Would you like to contribute one?

Datasets


Most implemented papers

Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence

HSLCY/ABSA-BERT-pair NAACL 2019

Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA).

CLUE: A Chinese Language Understanding Evaluation Benchmark

CLUEbenchmark/CLUE COLING 2020

The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE allows new NLU models to be evaluated across a diverse set of tasks.

Glyce: Glyph-vectors for Chinese Character Representations

ShannonAI/glyce NeurIPS 2019

However, due to the lack of rich pictographic evidence in glyphs and the weak generalization ability of standard computer vision models on character data, an effective way to utilize the glyph information remains to be found.

CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark

cbluebenchmark/cblue ACL 2022

Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.

Continual and Multi-Task Architecture Search

ramakanth-pasunuru/CAS-MAS ACL 2019

Architecture search is the process of automatically learning the neural model or cell structure that best suits the given task.

Elastic weight consolidation for better bias inoculation

j6mes/eacl2021-debias-finetuning EACL 2021

The biases present in training datasets have been shown to affect models for sentence pair classification tasks such as natural language inference (NLI) and fact verification.

Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach

yueyu1030/COSINE NAACL 2021

To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision.

Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models

lancopku/Embedding-Poisoning NAACL 2021

However, in this paper, we find that it is possible to hack the model in a data-free way by modifying one single word embedding vector, with almost no accuracy sacrificed on clean samples.

Neural semi-Markov CRF for Monolingual Word Alignment

chaojiang06/neural-Jacana ACL 2021

Monolingual word alignment is important for studying fine-grained editing operations (i. e., deletion, addition, and substitution) in text-to-text generation tasks, such as paraphrase generation, text simplification, neutralizing biased language, etc.