Data Poisoning
123 papers with code • 0 benchmarks • 0 datasets
Data Poisoning is an adversarial attack in which the attacker manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to label malicious examples as a desired class (e.g., labeling spam e-mails as safe).
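The spam example above can be illustrated with a minimal label-flipping sketch: the attacker relabels a fraction of one class as another before training. All names here (`poison_labels`, the toy label array) are hypothetical illustrations, not taken from any cited paper.

```python
import numpy as np

def poison_labels(y, target_class, safe_class, flip_fraction, seed=0):
    """Flip a fraction of target_class labels to safe_class,
    e.g. relabeling spam examples as 'safe' in the training set."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = np.flatnonzero(y == target_class)          # indices of the attacked class
    n_flip = int(len(idx) * flip_fraction)           # how many labels to corrupt
    flipped = rng.choice(idx, size=n_flip, replace=False)
    y[flipped] = safe_class
    return y

# toy labels: 0 = safe, 1 = spam; poison half of the spam labels
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1])
poisoned = poison_labels(labels, target_class=1, safe_class=0, flip_fraction=0.5)
```

A model trained on `poisoned` instead of `labels` would see far fewer spam examples, biasing it toward the attacker's desired "safe" prediction.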
Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
Benchmarks
These leaderboards are used to track progress in Data Poisoning
Libraries
Use these libraries to find Data Poisoning models and implementations
Latest papers with no code
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
In this paper, we extend the exploration of the threat of indiscriminate attacks on downstream tasks that apply pre-trained feature extractors.
Purifying Large Language Models by Ensembling a Small Language Model
The emerging success of large language models (LLMs) heavily relies on collecting abundant training data from external (untrusted) sources.
SusFL: Energy-Aware Federated Learning-based Monitoring for Sustainable Smart Farms
We propose a novel energy-aware federated learning (FL)-based system, namely SusFL, for sustainable smart farming to address the challenge of inconsistent health monitoring due to fluctuating energy levels of solar sensors.
Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems
Recent studies have shown that recommender systems (RSs) are highly vulnerable to data poisoning attacks.
Security and Privacy Challenges of Large Language Models: A Survey
We assess the extent of LLM vulnerabilities, investigate emerging security and privacy attacks for LLMs, and review the potential defense mechanisms.
Federated Learning with Dual Attention for Robust Modulation Classification under Attacks
To this end, we leverage attention mechanisms as a defense against attacks in FL and propose a robust FL algorithm by integrating the attention mechanisms into the global model aggregation step.
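One way such an attention-based aggregation defense can work is to weight each client update by its softmax similarity to the current global model, so anomalous (potentially poisoned) updates are down-weighted. The sketch below is a simplified assumption of this idea, not the paper's actual algorithm; `attention_aggregate` and its parameters are illustrative.

```python
import numpy as np

def attention_aggregate(global_w, client_ws, temperature=1.0):
    """Aggregate client weight vectors, weighting each by a softmax
    over its (negative) distance to the current global model."""
    scores = np.array([-np.linalg.norm(w - global_w) for w in client_ws]) / temperature
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return sum(w * a for w, a in zip(client_ws, weights))

global_w = np.zeros(4)
# two benign updates and one far-off (e.g. poisoned) update
clients = [np.full(4, 0.1), np.full(4, 0.12), np.full(4, 5.0)]
agg = attention_aggregate(global_w, clients)
```

Here the plain average would be pulled toward the outlier, while the attention-weighted aggregate stays close to the benign updates.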
A GAN-based data poisoning framework against anomaly detection in vertical federated learning
Specifically, the malicious participant initially employs semi-supervised learning to train a surrogate target model.
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
This study explores the vulnerabilities associated with copyright protection in DMs by introducing a backdoor data poisoning attack (SilentBadDiffusion) against text-to-image diffusion models.
Data-Dependent Stability Analysis of Adversarial Training
Stability analysis is an essential aspect of studying the generalization ability of deep learning, as it involves deriving generalization bounds for stochastic gradient descent-based training algorithms.
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection
The extensive adoption of self-supervised learning (SSL) has led to an increased security threat from backdoor attacks.