Privacy Preserving Deep Learning
26 papers with code • 0 benchmarks • 3 datasets
The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, the requirement is that the trained model itself is privacy-preserving (e.g., because the training algorithm is differentially private).
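The canonical way to make the training algorithm differentially private is DP-SGD (Abadi et al., 2016): clip each example's gradient, then add calibrated Gaussian noise before the update. A minimal sketch of one such step, using toy logistic regression and hypothetical hyperparameter values (`clip_norm`, `noise_mult` are illustrative, not tuned):

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One differentially private SGD step for logistic regression.

    Per-example gradients are clipped to L2 norm <= clip_norm, summed,
    and Gaussian noise scaled by noise_mult * clip_norm is added before
    averaging -- the core DP-SGD recipe.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    preds = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid probabilities
    per_example_grads = (preds - y)[:, None] * X  # shape (n, d)
    # Clip each example's gradient so no single record dominates.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # Add noise calibrated to the clipping bound, then average.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / n
    return w - lr * noisy_grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = (X[:, 0] > 0).astype(float)
w = dp_sgd_step(np.zeros(4), X, y, rng=rng)
```

The privacy guarantee (the epsilon spent) depends on the noise multiplier, batch sampling, and number of steps, and is tracked separately by a privacy accountant; this sketch shows only the per-step mechanics.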
Benchmarks
These leaderboards are used to track progress in Privacy Preserving Deep Learning
Most implemented papers
Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version)
The introduced DP should help limit the leakage threats posed by MIAs (membership inference attacks), and our practical analysis is the first to test this hypothesis on the COVID-19 classification task.
Collaborative Training of Medical Artificial Intelligence Models with non-uniform Labels
Due to the rapid advancements in recent years, medical image analysis is largely dominated by deep learning (DL).
Memorization of Named Entities in Fine-tuned BERT Models
One such risk is training-data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information.
Split Without a Leak: Reducing Privacy Leakage in Split Learning
The idea behind it is that the client encrypts the activation map (the output of the split layer between the client and the server) before sending it to the server.
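The message flow of split learning described above can be sketched as follows. The paper encrypts the activation map homomorphically; here a toy additive mask stands in for encryption purely to show where data crosses the client/server boundary (all layer shapes and names are hypothetical, and the mask is NOT a secure scheme):

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Client side: compute up to the split layer. ---
W_client = rng.normal(size=(8, 4))
x = rng.normal(size=4)                      # private client input
activation = np.maximum(0.0, W_client @ x)  # ReLU activation map at the split

# "Encrypt" the activation before it leaves the client.
# Placeholder: an additive mask stands in for homomorphic encryption,
# which in the real protocol lets the server compute on ciphertexts.
mask = rng.normal(size=activation.shape)
sent_to_server = activation + mask          # the only thing the server sees

# --- Server side: continue the forward pass on the received tensor. ---
W_server = rng.normal(size=(2, 8))
server_out = W_server @ sent_to_server
```

The key point is that the raw activation (and hence the raw input) never leaves the client; with actual homomorphic encryption the server's computation on the ciphertext yields an encrypted result that only the client can decrypt.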
Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models
So far, the impact of the training strategy, i.e., local versus collaborative, on the on-domain and off-domain diagnostic performance of AI models interpreting chest radiographs has not been assessed.
Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning
To address these challenges, we propose a novel Privacy-Preserving framework that uses a set of deformable operators for secure task learning.