Reconstruction Attack

13 papers with code • 0 benchmarks • 0 datasets

Reconstruction attacks against facial manipulation models, such as face swapping models, anonymization models, etc.

Most implemented papers

Reconstructing Training Data with Informed Adversaries

deepmind/informed_adversary_mnist_reconstruction 13 Jan 2022

Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
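The differential-privacy mitigation the paper evaluates follows the standard DP-SGD recipe: clip each per-example gradient to a fixed norm, then add Gaussian noise before averaging. A minimal sketch of that step (function and parameter names here are illustrative, not from the paper's code):

```python
import numpy as np

def privatize(per_example_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip each per-example gradient to `clip_norm`, sum, add Gaussian
    noise scaled by `noise_mult * clip_norm`, and average. This bounds any
    single example's influence on the update, which is what limits
    reconstruction of individual training points."""
    rng = rng or np.random.default_rng(0)
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(scale=noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

With `noise_mult=0` this reduces to plain gradient clipping; the "parameter regime where utility degradation is minimal" corresponds to choosing a clip norm and noise multiplier large enough to defeat reconstruction but small enough not to hurt accuracy.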

A Review of Anonymization for Healthcare Data

iyempissy/anonymization-reconstruction-attack 13 Apr 2021

Mining health data can lead to faster medical decisions, improvement in the quality of treatment, disease prevention, reduced cost, and it drives innovative solutions within the healthcare sector.

Inference Attacks Against Graph Neural Networks

zhangzhk0819/gnn-embedding-leaks 6 Oct 2021

Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.

When the Curious Abandon Honesty: Federated Learning Is Not Private

JonasGeiping/breaching 6 Dec 2021

Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.
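The shared gradients themselves can leak the client's data. For a fully connected layer with a bias, the leak is analytic: each row of the weight gradient is the bias gradient times the input, so a curious server can recover the input exactly. A minimal numpy sketch of this well-known observation (the toy model and names are illustrative, not the paper's code):

```python
import numpy as np

# Toy "client": one fully connected layer with bias, squared-error loss.
rng = np.random.default_rng(0)
x = rng.normal(size=4)            # private client input
W = rng.normal(size=(3, 4))       # shared model weights
b = np.zeros(3)
y = np.array([1.0, 0.0, 0.0])     # client label

# Client computes gradients and shares them, as in federated learning.
out = W @ x + b
err = out - y                     # dL/dout for L = 0.5 * ||out - y||^2
grad_W = np.outer(err, x)         # dL/dW = err * x^T
grad_b = err                      # dL/db = err

# Curious server: grad_W[i] = grad_b[i] * x, so any row with a nonzero
# bias gradient reveals the private input exactly.
i = int(np.argmax(np.abs(grad_b)))
x_reconstructed = grad_W[i] / grad_b[i]
```

Real attacks like those in the breaching repository handle deeper networks and batched updates, where reconstruction becomes an optimization problem rather than a closed-form division.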

How Private Is Your RL Policy? An Inverse RL Based Analysis Framework

magnetar-iiith/pril 10 Dec 2021

Reinforcement Learning (RL) enables agents to learn how to perform various tasks from scratch.

TabLeak: Tabular Data Leakage in Federated Learning

eth-sri/tableak 4 Oct 2022

A successful attack for tabular data must address two key challenges unique to the domain: (i) obtaining a solution to a high-variance mixed discrete-continuous optimization problem, and (ii) enabling human assessment of the reconstruction, since, unlike for image and text data, direct human inspection is not possible.
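A common way to make such mixed discrete-continuous problems amenable to gradient descent is to relax each categorical feature into softmax logits, optimize everything continuously, and project back to a discrete value at the end. A toy sketch of that relaxation under assumed names and a stand-in objective (this is not TabLeak's actual code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Ground-truth row the attacker tries to recover: one categorical feature
# (one-hot over 3 categories) and one continuous feature. The squared-error
# target is a stand-in for whatever leakage objective the attack minimizes.
true_onehot = np.array([0.0, 1.0, 0.0])
true_cont = 2.5
target = np.concatenate([true_onehot, [true_cont]])

logits = np.zeros(3)   # relaxed categorical variable
cont = 0.0             # continuous variable
lr = 0.5

for _ in range(500):
    p = softmax(logits)
    guess = np.concatenate([p, [cont]])
    grad = 2.0 * (guess - target)            # d/dguess of ||guess - target||^2
    # Backprop through softmax: (J^T g)_j = p_j * (g_j - p . g).
    logits -= lr * (p * (grad[:3] - p @ grad[:3]))
    cont -= lr * grad[3]

category = int(np.argmax(softmax(logits)))   # project back to a discrete value
```

The relaxation keeps the whole objective differentiable; the "high variance" noted in the abstract comes from the many local optima such relaxed problems have in realistic settings.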

Confidence-Ranked Reconstruction of Census Microdata from Published Statistics

terranceliu/rap-rank-reconstruction 6 Nov 2022

Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.

Vicious Classifiers: Data Reconstruction Attack at Inference Time

mmalekzadeh/vicious-classifiers 8 Dec 2022

Privacy-preserving inference in edge computing paradigms encourages the users of machine-learning services to locally run a model on their private input, for a target task, and only share the model's outputs with the server.

Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation

Guang000/Awesome-Dataset-Distillation 2 Feb 2023

We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset, and that these reconstruction attacks can be used for dataset distillation; that is, we can retrain on reconstructed images and obtain high predictive accuracy.

LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation

Manishpandey-0/Adversarial-reconstruction-attack-on-FL-using-LOKI 21 Mar 2023

When both FedAVG and secure aggregation are used, there is no current method that is able to attack multiple clients concurrently in a federated learning setting.