Search Results for author: Adam Dziedzic

Found 21 papers, 7 papers with code

Decentralised, Collaborative, and Privacy-preserving Machine Learning for Multi-Hospital Data

1 code implementation31 Jan 2024 Congyu Fang, Adam Dziedzic, Lin Zhang, Laura Oliva, Amol Verma, Fahad Razak, Nicolas Papernot, Bo Wang

In addition, the ML models trained with the DeCaPH framework generally outperform those trained solely on the private datasets of individual parties, showing that DeCaPH improves model generalizability.

Mortality Prediction Privacy Preserving

Memorization in Self-Supervised Learning Improves Downstream Generalization

1 code implementation19 Jan 2024 Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch

Our definition compares the alignment between the representations of data points and their augmented views, as returned by encoders that were trained on these data points and by encoders that were not.

Memorization Self-Supervised Learning
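
The alignment-based definition excerpted above can be sketched in a few lines. A minimal illustration, assuming cosine similarity as the alignment measure and toy random-projection encoders standing in for encoders trained with and without the data point (all names and choices here are hypothetical, not the paper's implementation):

```python
import numpy as np

def alignment(encoder, x, x_aug):
    """Cosine similarity between the representations of a point and its augmented view."""
    r, r_aug = encoder(x), encoder(x_aug)
    return float(np.dot(r, r_aug) / (np.linalg.norm(r) * np.linalg.norm(r_aug)))

def memorization_score(encoder_with, encoder_without, x, x_aug):
    """Hypothetical per-point memorization score: alignment under an encoder
    trained on x minus alignment under an encoder trained without x."""
    return alignment(encoder_with, x, x_aug) - alignment(encoder_without, x, x_aug)

# Toy stand-ins for trained encoders (random linear projections).
rng = np.random.default_rng(0)
W_with = rng.standard_normal((8, 16))
W_without = rng.standard_normal((8, 16))
enc_with = lambda v: W_with @ v
enc_without = lambda v: W_without @ v

x = rng.standard_normal(16)
x_aug = x + 0.1 * rng.standard_normal(16)  # stand-in for a data augmentation
score = memorization_score(enc_with, enc_without, x, x_aug)
print(round(score, 3))
```

A large positive score would suggest the first encoder aligns the point with its augmentation much more tightly than an encoder that never saw it, i.e. memorization in the sense of the excerpt.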

Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation

no code implementations9 Jan 2023 Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e.g., a company) coordinating the distributed training.

Federated Learning

Dataset Inference for Self-Supervised Models

no code implementations16 Sep 2022 Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot

We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute its ownership in the event of stealing.

Attribute Density Estimation

$p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations

no code implementations25 Jul 2022 Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot

We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations to compute $p$-values associated with the end-to-end model prediction.

Autonomous Driving Out-of-Distribution Detection +1
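
A rough sketch of the inference procedure described above, assuming mean k-nearest-neighbor distance as the per-layer nonconformity score and a Bonferroni-style minimum as the aggregation across layers (both are illustrative choices standing in for the paper's exact test statistics):

```python
import numpy as np

def nonconformity(feat, train_feats, k=5):
    """Mean distance to the k nearest training representations at one layer."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    return np.sort(d)[:k].mean()

def layer_p_value(feat, train_feats, calib_feats, k=5):
    """Empirical p-value: how extreme the test point's nonconformity is
    relative to a held-out calibration set at this layer."""
    calib = np.array([nonconformity(c, train_feats, k) for c in calib_feats])
    s = nonconformity(feat, train_feats, k)
    return (np.sum(calib >= s) + 1) / (len(calib) + 1)

def p_dknn(test_per_layer, train_per_layer, calib_per_layer, k=5):
    """Hypothetical aggregation: Bonferroni-corrected minimum over per-layer
    p-values; a small combined value flags the input as out-of-distribution."""
    ps = [layer_p_value(f, tr, ca, k)
          for f, tr, ca in zip(test_per_layer, train_per_layer, calib_per_layer)]
    return min(1.0, len(ps) * min(ps))

rng = np.random.default_rng(0)
layers = 3
train = [rng.standard_normal((200, 8)) for _ in range(layers)]
calib = [rng.standard_normal((50, 8)) for _ in range(layers)]
in_dist = [rng.standard_normal(8) for _ in range(layers)]
ood = [rng.standard_normal(8) + 6.0 for _ in range(layers)]  # shifted inputs
print(p_dknn(in_dist, train, calib), p_dknn(ood, train, calib))
```

The shifted input sits far from every training representation at every layer, so its combined p-value is small, while an in-distribution input typically receives an unremarkable one.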

Selective Classification Via Neural Network Training Dynamics

no code implementations26 May 2022 Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot

Selective classification is the task of rejecting inputs on which a model would predict incorrectly, trading off input-space coverage against model accuracy.

Classification
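
The coverage/accuracy trade-off above can be illustrated with a generic confidence threshold. Note the paper's selection score is derived from training dynamics; this sketch uses a stand-in confidence value purely to show the trade-off:

```python
import numpy as np

def selective_metrics(confidences, correct, threshold):
    """Accept predictions with confidence >= threshold; return
    coverage (fraction accepted) and accuracy on the accepted inputs."""
    accept = confidences >= threshold
    coverage = float(accept.mean())
    sel_acc = float(correct[accept].mean()) if accept.any() else float("nan")
    return coverage, sel_acc

# Toy data: confidence loosely correlates with correctness.
conf = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.55])
correct = np.array([1, 1, 1, 0, 1, 0], dtype=float)
for t in (0.5, 0.75, 0.9):
    print(t, selective_metrics(conf, correct, t))
```

Raising the threshold lowers coverage but raises accuracy on the inputs the model still answers, which is exactly the trade-off a selective classifier tunes.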

On the Difficulty of Defending Self-Supervised Learning against Model Extraction

1 code implementation16 May 2022 Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot

We construct several novel attacks and find that approaches that train directly on a victim's stolen representations are query efficient and enable high accuracy for downstream models.

Model extraction Self-Supervised Learning

Individualized PATE: Differentially Private Machine Learning with Individual Privacy Guarantees

no code implementations21 Feb 2022 Franziska Boenisch, Christopher Mühl, Roy Rinberg, Jannis Ihrig, Adam Dziedzic

Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP).

BIG-bench Machine Learning

Increasing the Cost of Model Extraction with Calibrated Proof of Work

no code implementations ICLR 2022 Adam Dziedzic, Muhammad Ahmad Kaleem, Yu Shen Lu, Nicolas Papernot

Since we calibrate the effort required to complete the proof-of-work to each query, this only introduces a slight overhead for regular users (up to 2x).

BIG-bench Machine Learning Model extraction
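
A hashcash-style sketch of calibrated proof-of-work, assuming a simple leading-zeros puzzle and a hypothetical `calibrated_difficulty` mapping from how attack-like a query looks to puzzle hardness (the paper calibrates effort per query; this particular mapping is illustrative):

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256(challenge + nonce) hex digest
    starts with `difficulty` zero digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def calibrated_difficulty(suspicion: float, base: int = 1, max_extra: int = 3) -> int:
    """Hypothetical calibration: scale puzzle hardness with how attack-like
    the query looks, so regular users pay only the small base cost."""
    return base + round(max(0.0, min(1.0, suspicion)) * max_extra)

# A benign-looking query gets an easy puzzle; a suspicious one gets a harder one.
print(calibrated_difficulty(0.0), calibrated_difficulty(1.0))
nonce = solve_pow("query-123", calibrated_difficulty(0.0))
```

Each extra leading zero multiplies the expected work by 16, so an attacker issuing many suspicious queries pays a steep cumulative cost while ordinary users see only the slight overhead the excerpt mentions.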

When the Curious Abandon Honesty: Federated Learning Is Not Private

1 code implementation6 Dec 2021 Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.

Federated Learning Privacy Preserving +1

CaPC Learning: Confidential and Private Collaborative Learning

1 code implementation ICLR 2021 Christopher A. Choquette-Choo, Natalie Dullerud, Adam Dziedzic, Yunxiang Zhang, Somesh Jha, Nicolas Papernot, Xiao Wang

There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved to prevent both explicit and implicit sharing of data.

Fairness Federated Learning

Pretrained Transformers Improve Out-of-Distribution Robustness

1 code implementation ACL 2020 Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, Dawn Song

Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions?

Machine Learning enabled Spectrum Sharing in Dense LTE-U/Wi-Fi Coexistence Scenarios

no code implementations18 Mar 2020 Adam Dziedzic, Vanlin Sathya, Muhammad Iqbal Rochman, Monisha Ghosh, Sanjay Krishnan

The promise of ML techniques for solving non-linear problems motivated this work, which applies known ML techniques and develops new ones for wireless spectrum sharing between Wi-Fi and LTE in the unlicensed spectrum.

BIG-bench Machine Learning

Analysis of Random Perturbations for Robust Convolutional Neural Networks

no code implementations8 Feb 2020 Adam Dziedzic, Sanjay Krishnan

Recent work has extensively shown that randomized perturbations of neural networks can improve robustness to adversarial attacks.
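
One common instance of the randomized perturbations discussed above is smoothing a classifier by majority vote over noisy copies of the input. A minimal sketch with a toy classifier (the noise level, vote count, and classifier are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n=100, seed=0):
    """Predict by majority vote over Gaussian perturbations of the input,
    which tends to wash out small adversarial changes near decision boundaries."""
    rng = np.random.default_rng(seed)
    votes = [classifier(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)]
    vals, counts = np.unique(votes, return_counts=True)
    return int(vals[np.argmax(counts)])

# Toy 1-D decision rule: class 1 iff the features sum to a positive value.
clf = lambda v: int(v.sum() > 0)
x = np.array([0.3, -0.1])  # sum = 0.2, close to the boundary
print(smoothed_predict(clf, x))
```

The vote aggregates many slightly perturbed views of the input, so a single small adversarial nudge is far less likely to flip the smoothed prediction than the base classifier's.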

Machine Learning based detection of multiple Wi-Fi BSSs for LTE-U CSAT

no code implementations21 Nov 2019 Vanlin Sathya, Adam Dziedzic, Monisha Ghosh, Sanjay Krishnan

This approach delivers accuracy close to 100%, compared to the auto-correlation (AC) and energy detection (ED) approaches.

BIG-bench Machine Learning

A Perturbation Analysis of Input Transformations for Adversarial Attacks

no code implementations25 Sep 2019 Adam Dziedzic, Sanjay Krishnan

The existence of adversarial examples, or intentional mis-predictions constructed from small changes to correctly predicted examples, is one of the most significant challenges in neural network research today.
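
A small worked example of such an adversarial perturbation, using an FGSM-style step against a linear logistic model (the model, weights, and step size are illustrative, not taken from the paper):

```python
import numpy as np

def fgsm_linear(w, b, x, y, eps):
    """FGSM-style attack on a linear logistic model: nudge each feature by
    eps in the sign of the cross-entropy loss gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # d(loss)/dx for true label y in {0, 1}
    return x + eps * np.sign(grad_x)

w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, -0.5]), 1            # score w @ x + b = 1.5 > 0: class 1
x_adv = fgsm_linear(w, b, x, y, eps=0.9)
print(w @ x_adv + b)                       # negative score: flipped to class 0
```

A perturbation of at most 0.9 per feature is enough to push the score across the decision boundary, which is the "small change, intentional mis-prediction" phenomenon the excerpt describes.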

Testing Robustness Against Unforeseen Adversaries

3 code implementations21 Aug 2019 Max Kaufmann, Daniel Kang, Yi Sun, Steven Basart, Xuwang Yin, Mantas Mazeika, Akul Arora, Adam Dziedzic, Franziska Boenisch, Tom Brown, Jacob Steinhardt, Dan Hendrycks

To narrow in on this discrepancy between research and reality, we introduce ImageNet-UA, a framework for evaluating model robustness against a range of unforeseen adversaries, including eighteen new non-$L_p$ attacks.

Adversarial Defense Adversarial Robustness
