Inference Attack
86 papers with code • 0 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in Inference Attack
Libraries
Use these libraries to find Inference Attack models and implementations
Latest papers
Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk
While diffusion models have recently demonstrated remarkable progress in generating realistic images, privacy risks also arise: published models or APIs could generate training images and thus leak privacy-sensitive training information.
Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks
Machine learning (ML) is vulnerable to inference attacks (e.g., membership inference, property inference, and data reconstruction) that aim to infer private information about the training data or dataset.
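As a concrete illustration of the membership inference attacks mentioned above, a minimal sketch of a confidence-thresholding attack is shown below. All numbers here are synthetic assumptions, not from any paper: we simulate the common observation that a model tends to assign higher confidence to training members than to non-members, and the attacker simply thresholds that confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confidence distributions (assumptions for illustration):
# members (training points) tend to receive higher model confidence
# than non-members -- the signal a threshold-based MIA exploits.
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=1000)   # centered near 0.5

def threshold_attack(confidences, tau=0.7):
    """Predict 'member' (1) when model confidence exceeds tau."""
    return (np.asarray(confidences) >= tau).astype(int)

# Evaluate the attack on a balanced member/non-member split.
preds = np.concatenate([threshold_attack(member_conf),
                        threshold_attack(nonmember_conf)])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])
accuracy = (preds == labels).mean()
print(f"attack accuracy: {accuracy:.2f}")
```

An accuracy meaningfully above 0.5 (random guessing on a balanced split) indicates membership leakage; real attacks replace the synthetic confidences with the target model's outputs and often calibrate the threshold with shadow models.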
Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment
AI software compression plays a crucial role in mitigating this issue, aiming to reduce model size while preserving high performance.
User Consented Federated Recommender System Against Personalized Attribute Inference Attack
However, the recommendation model learned by a common FedRec may still be vulnerable to private information leakage, particularly attribute inference attacks, in which an attacker infers users' personal attributes from the learned model.
DUCK: Distance-based Unlearning via Centroid Kinematics
Machine Unlearning is rising as a new field, driven by the pressing necessity of ensuring privacy in modern artificial intelligence models.
Addressing Membership Inference Attack in Federated Learning with Model Compression
In this paper, we show that the effectiveness of these attacks on the clients negatively correlates with the size of the client datasets and model complexity.
MIA-BAD: An Approach for Enhancing Membership Inference Attack and its Mitigation with Federated Learning
In this paper, we propose an enhanced Membership Inference Attack with the Batch-wise generated Attack Dataset (MIA-BAD), a modification to the MIA approach.
Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models
Several membership inference attacks (MIAs) have been proposed to demonstrate the privacy vulnerability of generative models by classifying a query image as a member or non-member of the training dataset.
No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML
These solutions, referred to as TEE-Shielded DNN Partition (TSDP), partition a DNN model into two parts, offloading the privacy-insensitive part to the GPU while shielding the privacy-sensitive part within the TEE.
SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems
Our attack is versatile and can work in both white-box and black-box scenarios.