Reconstruction Attack
13 papers with code • 0 benchmarks • 0 datasets
Reconstruction attacks aim to recover private inputs or training data from a model's outputs, parameters, or updates — for example, reconstructing the original face processed by a facial manipulation model (face swapping, anonymization, etc.).
Most implemented papers
Reconstructing Training Data with Informed Adversaries
Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
A Review of Anonymization for Healthcare Data
Mining health data can lead to faster medical decisions, improvement in the quality of treatment, disease prevention, reduced cost, and it drives innovative solutions within the healthcare sector.
Inference Attacks Against Graph Neural Networks
Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.
When the Curious Abandon Honesty: Federated Learning Is Not Private
Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.
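The updates shared in this setting can themselves leak private data. As a minimal illustration (a generic gradient-leakage toy, not this paper's attack): for a single linear neuron with bias trained on one example under squared loss, the weight gradient is the bias gradient scaled by the input, so a server seeing the gradient can recover the input exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client's private training example (never sent to the server).
x = rng.normal(size=5)          # private input features
y = 1.0                         # private label

# Shared model: a single linear neuron, loss = (w.x + b - y)^2.
w = rng.normal(size=5)
b = 0.1

# Client computes and shares the gradient of the loss.
residual = w @ x + b - y
grad_w = 2 * residual * x       # dL/dw = 2 * residual * x
grad_b = 2 * residual           # dL/db = 2 * residual

# Server-side reconstruction: grad_w / grad_b equals x exactly.
x_reconstructed = grad_w / grad_b
print(np.allclose(x_reconstructed, x))  # True
```

Real attacks handle batches, nonlinear networks, and noisy updates via iterative optimization, but the leakage mechanism is the same: gradients are functions of the private inputs.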
How Private Is Your RL Policy? An Inverse RL Based Analysis Framework
Reinforcement Learning (RL) enables agents to learn how to perform various tasks from scratch.
TabLeak: Tabular Data Leakage in Federated Learning
A successful attack for tabular data must address two key challenges unique to the domain: (i) obtaining a solution to a high-variance mixed discrete-continuous optimization problem, and (ii) enabling human assessment of the reconstruction, since, unlike image and text data, tabular data does not lend itself to direct human inspection.
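A common generic way to make such a mixed discrete-continuous problem amenable to gradient-based optimization (a standard relaxation technique, not TabLeak's code; all names here are illustrative) is to replace each categorical feature with a softmax over logits, optimize the logits continuously, and project back to a one-hot category at the end:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)

k = 4                                  # number of categories
A = rng.normal(size=(k, k))            # known linear map (e.g., an embedding)
true_onehot = np.eye(k)[2]             # private categorical value (category 2)
leaked = A @ true_onehot               # observed quantity the attacker matches

# Continuous relaxation: optimize logits z so that A @ softmax(z) ~ leaked.
z = np.zeros(k)
lr = 0.1
for _ in range(3000):
    p = softmax(z)
    r = A @ p - leaked                 # residual
    grad_p = A.T @ r                   # gradient of 0.5*||r||^2 w.r.t. p
    grad_z = p * (grad_p - p @ grad_p) # chain rule through softmax Jacobian
    z -= lr * grad_z

# Project back to a discrete category.
recovered = int(np.argmax(softmax(z)))
```

The relaxation keeps the objective differentiable everywhere, while the final argmax restores a valid discrete value.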
Confidence-Ranked Reconstruction of Census Microdata from Published Statistics
Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
Vicious Classifiers: Data Reconstruction Attack at Inference Time
Privacy-preserving inference in edge computing paradigms encourages users of machine-learning services to run a model locally on their private input for a target task and to share only the model's outputs with the server.
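Even output-only sharing can leak the input when the output dimensionality is high relative to the input. As a hedged toy sketch (not the paper's method, which trains models to covertly encode inputs): if the final mapping were linear and of full column rank, a server knowing the weights could recover the input by least squares:

```python
import numpy as np

rng = np.random.default_rng(2)

d_in, d_out = 4, 10                 # more outputs than input features
W = rng.normal(size=(d_out, d_in))  # model weights, known to the server

x_private = rng.normal(size=d_in)   # user's private input
outputs = W @ x_private             # the only thing shared with the server

# Server inverts the overdetermined linear system via least squares.
x_hat, *_ = np.linalg.lstsq(W, outputs, rcond=None)
print(np.allclose(x_hat, x_private))  # True
```

Real models are nonlinear, so exact inversion fails; the point of the paper is that a malicious party can *train* the model so that its outputs deliberately carry enough information for reconstruction.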
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset, and that these reconstruction attacks can be used for dataset distillation: that is, we can retrain on reconstructed images and obtain high predictive accuracy.
LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation
When both FedAVG and secure aggregation are used, there is no current method that is able to attack multiple clients concurrently in a federated learning setting.