SST-2

21 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

ScaleFL: Resource-Adaptive Federated Learning With Heterogeneous Clients

git-disl/scale-fl CVPR 2023

In most FL approaches, all edge clients are assumed to have sufficient computation capabilities to participate in the learning of a deep neural network (DNN) model.

Adaptive Deep Neural Network Inference Optimization with EENet

git-disl/eenet 15 Jan 2023

Instead of having every sample go through all DNN layers during prediction, EENet learns an early-exit scheduler that can intelligently terminate inference early for predictions in which the model already has high confidence.
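The core idea of confidence-based early exit can be illustrated with a minimal sketch. This is not EENet's learned scheduler, just the simplest thresholding variant: each exit head produces class logits, and inference stops at the first head whose softmax confidence clears a threshold. All names here (`exit_heads`, `early_exit_predict`) are illustrative.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_predict(exit_heads, x, threshold=0.9):
    """Run successive exit heads; stop once confidence clears the threshold.

    exit_heads: list of callables mapping an input to class logits,
    a stand-in for a DNN's intermediate classifiers.
    Returns (predicted_class, exit_depth).
    """
    for depth, head in enumerate(exit_heads):
        probs = softmax(head(x))
        confidence = max(probs)
        if confidence >= threshold:
            return probs.index(confidence), depth  # exit early
    # No head was confident enough: fall through to the final exit.
    return probs.index(confidence), len(exit_heads) - 1
```

A learned scheduler like EENet's replaces the fixed threshold with a policy trained to trade accuracy against inference cost.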

TrojText: Test-time Invisible Textual Trojan Insertion

ucf-ml-research/trojtext 3 Mar 2023

This paper proposes a solution called TrojText, which aims to determine whether invisible textual Trojan attacks can be performed more efficiently and cost-effectively without training data.

Masked Language Model Based Textual Adversarial Example Detection

mlmddetection/mlmddetection 18 Apr 2023

To explore how to use the masked language model in adversarial detection, we propose a novel textual adversarial example detection method, namely Masked Language Model-based Detection (MLMD), which can produce clearly distinguishable signals between normal examples and adversarial examples by exploring the changes in manifolds induced by the masked language model.
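The detection signal can be sketched in miniature: mask and re-fill each token with a masked language model, and measure how often the downstream classifier's prediction changes. This is a simplified caricature of MLMD, not the paper's actual manifold-based procedure; `tokenize`, `mask_and_fill`, and `classify` are assumed stand-ins for the real tokenizer, masked LM, and victim classifier.

```python
def mlmd_score(text, tokenize, mask_and_fill, classify):
    """Toy MLMD-style score: fraction of mask-and-refill operations
    that flip the classifier's prediction.

    mask_and_fill(tokens, i): returns the token list with position i
    masked and re-filled by a masked language model (assumed helper).
    classify(tokens): returns a class label (assumed helper).
    """
    tokens = tokenize(text)
    original = classify(tokens)
    flips = 0
    for i in range(len(tokens)):
        reconstructed = mask_and_fill(tokens, i)
        if classify(reconstructed) != original:
            flips += 1
    # Adversarial examples tend to sit off the data manifold, so
    # reconstruction flips their prediction more often.
    return flips / max(len(tokens), 1)
```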

Text Classification via Large Language Models

shannonai/gpt-cls-carp 15 May 2023

This is due to (1) the lack of reasoning ability in addressing complex linguistic phenomena (e.g., intensification, contrast, irony, etc.); (2) the limited number of tokens allowed in in-context learning.

From Shortcuts to Triggers: Backdoor Defense with Denoised PoE

luka-group/dpoe 24 May 2023

Language models are often at risk of diverse backdoor attacks, especially data poisoning.

OrderBkd: Textual backdoor attack through repositioning

alekseevskaia/orderbkd 12 Feb 2024

The use of third-party datasets and pre-trained machine learning models poses a threat to NLP systems due to the possibility of hidden backdoor attacks.

When does word order matter and when doesn't it?

xdchen2/order 29 Feb 2024

Our results show that the less informative word order is, the more consistent the model's predictions are between unscrambled and scrambled sentences.
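The consistency measurement implied here can be sketched directly: scramble each sentence several times and count how often the classifier's prediction matches its prediction on the original. This is a toy illustration, not the paper's evaluation protocol; `classify` is an assumed stand-in for a real model.

```python
import random

def scramble(sentence, rng):
    """Return the sentence with its words randomly reordered."""
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

def prediction_consistency(classify, sentences, trials=10, seed=0):
    """Fraction of scrambled variants whose prediction matches the original."""
    rng = random.Random(seed)
    matches = total = 0
    for s in sentences:
        original = classify(s)
        for _ in range(trials):
            matches += classify(scramble(s, rng)) == original
            total += 1
    return matches / total
```

A purely bag-of-words classifier scores 1.0 on this metric; the gap from 1.0 indicates how much the model's decision depends on word order.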

LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement

squeezeailab/llm2llm 22 Mar 2024

LLM2LLM (1) fine-tunes a baseline student LLM on the initial seed data, (2) evaluates and extracts data points that the model gets wrong, and (3) uses a teacher LLM to generate synthetic data based on these incorrect data points, which are then added back into the training data.
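The three-step loop above can be sketched as a plain Python driver. The callables `train_fn`, `student_eval`, and `teacher_augment` are assumed stand-ins for real fine-tuning, per-example evaluation, and teacher-LLM generation; only the control flow mirrors the description.

```python
def llm2llm_round(train_fn, student_eval, teacher_augment, seed_data, rounds=3):
    """Sketch of the LLM2LLM iterative data-enhancement loop.

    train_fn(data)            -> trained student model        (step 1)
    student_eval(model, ex)   -> True if the student gets ex right
    teacher_augment(wrong)    -> synthetic examples from the teacher (step 3)
    """
    data = list(seed_data)
    model = None
    for _ in range(rounds):
        model = train_fn(data)                                      # (1) fine-tune student
        wrong = [ex for ex in data if not student_eval(model, ex)]  # (2) collect errors
        if not wrong:
            break
        data.extend(teacher_augment(wrong))                         # (3) add teacher data
    return model, data
```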

Revisiting character-level adversarial attacks

lions-epfl/charmer 7 May 2024

Adversarial attacks in Natural Language Processing apply perturbations at the character or token level.
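A character-level perturbation can be as simple as swapping two adjacent characters inside a word. The sketch below is a toy stand-in, not the Charmer attack itself, which searches for perturbations that actually flip a model's prediction.

```python
import random

def char_perturb(text, rng=None):
    """Swap two adjacent characters inside a randomly chosen word.

    A common character-level perturbation (cf. typo-style attacks);
    this toy version perturbs blindly rather than adversarially.
    """
    rng = rng or random.Random(0)
    words = text.split()
    candidates = [i for i, w in enumerate(words) if len(w) >= 2]
    if not candidates:
        return text  # nothing long enough to perturb
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(len(w) - 1)
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)
```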