SST-2

20 papers with code • 0 benchmarks • 0 datasets

SST-2 (Stanford Sentiment Treebank, binary) is a sentence-level sentiment classification task over movie-review sentences labeled positive or negative, and is one of the tasks in the GLUE benchmark.

Most implemented papers

Leveraging QA Datasets to Improve Generative Data Augmentation

dheeraj7596/conda 25 May 2022

The ability of generative language models (GLMs) to generate text has improved considerably in the last few years, enabling their use for generative data augmentation.
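To make the idea concrete, here is a minimal sketch of GLM-based generative data augmentation, assuming the Hugging Face transformers library; the GPT-2 model, prompt format, and seed examples are illustrative assumptions, not the method of this paper.

```python
# A minimal sketch of generative data augmentation with a GLM (assumed setup,
# not the paper's recipe): prompt a generator with labeled seed examples and
# keep the continuations as synthetic training sentences.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical SST-2-style seed examples, one per label.
seed = {"positive": "a warm, funny, and engaging film .",
        "negative": "the plot is tired and the acting is flat ."}

augmented = []
for label, example in seed.items():
    prompt = f"Movie review ({label}): {example}\nMovie review ({label}):"
    outputs = generator(prompt, max_new_tokens=30, num_return_sequences=3,
                        do_sample=True, top_p=0.95)
    for out in outputs:
        # Keep only the newly generated continuation as a synthetic example.
        text = out["generated_text"][len(prompt):].strip().split("\n")[0]
        if text:
            augmented.append({"sentence": text, "label": label})

print(augmented)
```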

Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

ruiqi-zhong/acl2021-instance-level Findings (ACL) 2021

We develop statistically rigorous methods to address this, and after accounting for pretraining and finetuning noise, we find that BERT-Large is worse than BERT-Mini on at least 1-4% of instances across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of 2-10%.
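The sketch below illustrates the basic idea of an instance-level comparison across finetuning seeds; it is a naive version with synthetic correctness matrices, not the paper's statistically rigorous procedure.

```python
# A minimal sketch (assumed data, simplified statistics): average each model's
# per-instance correctness over several finetuning seeds, then count instances
# where the larger model does strictly worse than the smaller one.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0/1 correctness matrices of shape (num_seeds, num_instances).
n_seeds, n_instances = 5, 1000
bert_large = rng.binomial(1, 0.92, size=(n_seeds, n_instances))
bert_mini = rng.binomial(1, 0.85, size=(n_seeds, n_instances))

# Per-instance accuracy averaged over finetuning seeds.
acc_large = bert_large.mean(axis=0)
acc_mini = bert_mini.mean(axis=0)

worse = np.mean(acc_large < acc_mini)            # fraction where Large loses
overall_gain = acc_large.mean() - acc_mini.mean()
print(f"Large worse on {worse:.1%} of instances; overall gain {overall_gain:.1%}")
```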

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

google-research/google-research EMNLP 2021

Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available.

General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings

lgalke/cross-architecture-distillation 17 Sep 2021

We match or exceed the scores of ELMo for all tasks of the GLUE benchmark except for the sentiment analysis task SST-2 and the linguistic acceptability task CoLA.
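As a rough illustration of cross-architecture distillation into a matrix embedding, here is a minimal sketch assuming a BERT teacher, a mean-pooled embedding-matrix student, and an MSE objective; all of these choices are assumptions for illustration, not the paper's exact setup.

```python
# A minimal sketch (assumed setup): distill a transformer teacher's sentence
# representation into an embedding-matrix student (mean of token embeddings).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

teacher_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModel.from_pretrained(teacher_name).eval()

vocab_size, dim = tokenizer.vocab_size, teacher.config.hidden_size
# The student is just an embedding matrix; padding tokens are excluded from the mean.
student = nn.EmbeddingBag(vocab_size, dim, mode="mean",
                          padding_idx=tokenizer.pad_token_id)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

sentences = ["a gripping and well-acted thriller .",
             "the film never finds its footing ."]

for _ in range(3):  # a few toy distillation steps
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    with torch.no_grad():
        # Teacher target: the [CLS] hidden state for each sentence.
        target = teacher(**batch).last_hidden_state[:, 0]
    pred = student(batch["input_ids"])  # mean over token embeddings
    loss = mse(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(float(loss))
```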

Generating Training Data with Language Models: Towards Zero-Shot Language Understanding

yumeng5/supergen 9 Feb 2022

Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks.
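The sketch below illustrates the general pattern of generating labeled training data with a unidirectional PLM; the label-descriptive prompts and the likelihood-based filtering are assumptions for illustration, not the paper's exact recipe, and the resulting synthetic set would then be used to finetune a bidirectional PLM.

```python
# A minimal sketch (assumed prompts and filtering): generate label-conditioned
# sentences with GPT-2, keep the most fluent ones as pseudo-labeled data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

label_prompts = {1: "The movie review in positive sentiment is:",
                 0: "The movie review in negative sentiment is:"}

def generate(prompt, n=4):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, top_p=0.9, max_new_tokens=25,
                         num_return_sequences=n,
                         pad_token_id=tokenizer.eos_token_id)
    return [tokenizer.decode(o[ids.shape[1]:], skip_special_tokens=True).strip()
            for o in out]

def avg_log_likelihood(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token negative log-likelihood
    return -float(loss)

synthetic = []
for label, prompt in label_prompts.items():
    for text in generate(prompt):
        if len(text.split()) < 3:   # drop degenerate generations
            continue
        synthetic.append((avg_log_likelihood(text), text, label))

# Keep the most fluent generations as pseudo-labeled training data; a
# bidirectional PLM (e.g., BERT) would then be finetuned on this set.
for score, text, label in sorted(synthetic, reverse=True)[:4]:
    print(label, round(score, 2), text)
```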

A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis

salesforce/fewshot_absa Findings (NAACL) 2022

Our evaluation results on single-task polarity prediction show that our approach outperforms the previous state-of-the-art (based on BERT) in average performance by a large margin in both few-shot and full-shot settings.

Improving the Adversarial Robustness of NLP Models by Information Bottleneck

zhangcen456/ib Findings (ACL) 2022

Existing studies have demonstrated that adversarial examples can be directly attributed to the presence of non-robust features, which are highly predictive, but can be easily manipulated by adversaries to fool NLP models.

ELECTRA is a Zero-Shot Learner, Too

nishiwen1214/rtd-electra 17 Jul 2022

Numerically, compared to MLM-RoBERTa-large and MLM-BERT-large, our RTD-ELECTRA-large achieves average improvements of about 8.4% and 13.7%, respectively, across all 15 tasks.
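Here is a minimal sketch of using ELECTRA's replaced-token-detection (RTD) head as a zero-shot sentiment classifier, in the spirit of this paper; the prompt template and label words are assumptions, and ELECTRA's discriminator outputs one logit per token where higher values mean "this token looks replaced".

```python
# A minimal sketch (assumed template and label words): fill the prompt with
# each candidate label word and pick the one the RTD discriminator finds most
# plausible, i.e., least likely to have been replaced.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

name = "google/electra-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name).eval()

def zero_shot_sentiment(sentence, label_words=("great", "terrible")):
    scores = {}
    for word in label_words:
        text = f"{sentence} It was {word} ."
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**enc).logits[0]  # per-token "replaced" logits
        # Locate the candidate label word and read its score; lower means the
        # discriminator finds the word more plausible in this context.
        word_id = tokenizer.convert_tokens_to_ids(word)
        pos = (enc.input_ids[0] == word_id).nonzero()[-1].item()
        scores[word] = logits[pos].item()
    return min(scores, key=scores.get), scores

print(zero_shot_sentiment("a gripping and well-acted thriller ."))
```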

RPN: A Word Vector Level Data Augmentation Algorithm in Deep Learning for Language Understanding

DLYuanGod/RPN 12 Dec 2022

However, existing data augmentation techniques in natural language understanding (NLU) may not fully capture the complexity of natural language variations, and they can be challenging to apply to large datasets.