Search Results for author: Peter Kairouz

Found 46 papers, 14 papers with code

Context Aware Local Differential Privacy

no code implementations ICML 2020 Jayadev Acharya, Kallista Bonawitz, Peter Kairouz, Daniel Ramage, Ziteng Sun

The original definition of LDP assumes that all the elements in the data domain are equally sensitive.
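
For reference, the standard definition being relaxed here: a randomizer $Q$ satisfies $\epsilon$-LDP if for all pairs of inputs $x, x'$ and all outputs $y$,

$$Q(y \mid x) \leq e^{\epsilon} \, Q(y \mid x').$$

Since this bound must hold uniformly over every pair $(x, x')$, all elements of the domain receive the same protection; the context-aware variant relaxes exactly this uniformity.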

Can LLMs get help from other LLMs without revealing private information?

no code implementations 1 Apr 2024 Florian Hartmann, Duc-Hieu Tran, Peter Kairouz, Victor Cărbune, Blaise Aguera y Arcas

In this work, we show the feasibility of applying cascade systems in such setups by equipping the local model with privacy-preserving techniques that reduce the risk of leaking private information when querying the remote model.

Privacy Preserving
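
To make the cascade setup concrete, here is a minimal sketch, assuming a hypothetical local model that reports its own confidence and a naive redaction rule standing in for the paper's privacy-preserving techniques (all names and thresholds below are illustrative):

```python
import re

CONFIDENCE_THRESHOLD = 0.7  # hypothetical escalation cutoff

def redact(query: str) -> str:
    """Naive stand-in for privacy-preserving query rewriting:
    mask digit runs and capitalized tokens that may be private."""
    query = re.sub(r"\d+", "<NUM>", query)
    return re.sub(r"\b[A-Z][a-z]+\b", "<ENTITY>", query)

def cascade_answer(query: str, local_model, remote_model) -> str:
    """Answer locally when confident; otherwise escalate a sanitized query."""
    answer, confidence = local_model(query)  # assumed (text, score) interface
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return remote_model(redact(query))
```

The paper studies far more careful sanitization; the sketch only illustrates the control flow of a privacy-aware cascade.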

Privacy-Preserving Instructions for Aligning Large Language Models

no code implementations 21 Feb 2024 Da Yu, Peter Kairouz, Sewoong Oh, Zheng Xu

Service providers of large language model (LLM) applications collect user instructions in the wild and use them to further align LLMs with users' intentions.

Language Modelling Large Language Model +1

User Inference Attacks on Large Language Models

no code implementations 13 Oct 2023 Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu

Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.

Private Federated Learning with Autotuned Compression

1 code implementation 20 Jul 2023 Enayat Ullah, Christopher A. Choquette-Choo, Peter Kairouz, Sewoong Oh

We propose new techniques for reducing communication in private federated learning without the need for setting or tuning compression rates.

Federated Learning

Private Federated Frequency Estimation: Adapting to the Hardness of the Instance

no code implementations NeurIPS 2023 Jingfeng Wu, Wennan Zhu, Peter Kairouz, Vladimir Braverman

For single-round FFE, it is known that count sketching is nearly information-theoretically optimal for achieving the fundamental accuracy-communication trade-offs [Chen et al., 2022].
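
Since count sketching is the baseline referenced above, a minimal self-contained version may help; the hashing scheme and parameters here are illustrative, not the construction analyzed in the paper:

```python
import numpy as np

class CountSketch:
    """Minimal count sketch: `rows` independent (hash, sign) pairs of width `width`."""

    def __init__(self, rows: int = 5, width: int = 256, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((rows, width))
        self.salts = rng.integers(1, 2**31 - 1, size=rows)  # per-row hash salts
        self.width = width

    def _bucket(self, item: int, row: int) -> tuple[int, int]:
        h = hash((int(self.salts[row]), int(item)))  # deterministic for ints
        return h % self.width, 1 if (h >> 32) & 1 else -1

    def update(self, item: int, count: float = 1.0) -> None:
        for r in range(len(self.salts)):
            idx, sign = self._bucket(item, r)
            self.table[r, idx] += sign * count

    def estimate(self, item: int) -> float:
        # The median over rows limits the effect of hash collisions.
        vals = []
        for r in range(len(self.salts)):
            idx, sign = self._bucket(item, r)
            vals.append(sign * self.table[r, idx])
        return float(np.median(vals))
```

With rows $= O(\log(1/\beta))$ and width $= O(1/\alpha^2)$, the standard guarantee recovers each frequency to within $\alpha \lVert f \rVert_2$ with probability $1-\beta$, which is the accuracy-communication behavior the paper builds on.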

One-shot Empirical Privacy Estimation for Federated Learning

1 code implementation 6 Feb 2023 Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, Vinith M. Suriyakumar

Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or for empirically measuring privacy loss in settings where known analytical bounds are not tight.

Federated Learning

The Poisson binomial mechanism for secure and private federated learning

no code implementations 9 Jul 2022 Wei-Ning Chen, Ayfer Özgür, Peter Kairouz

Unlike previous discrete DP schemes based on additive noise, our mechanism encodes local information into a parameter of the binomial distribution, and hence the output distribution is discrete with bounded support.

Federated Learning
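
As a toy illustration of the encoding idea (the paper's actual choice of success probabilities and its privacy accounting are more careful), each client can map its value into the success probability of a binomial sample, so the server's sum follows a Poisson binomial distribution:

```python
import numpy as np

def encode(x: float, m: int = 64, rng=None) -> int:
    """Encode x in [-1, 1] as Binomial(m, p) with p depending linearly on x.
    Illustrative only: p stays inside [0.25, 0.75], so no value is revealed
    deterministically."""
    rng = rng or np.random.default_rng()
    return int(rng.binomial(m, 0.5 + 0.25 * x))

# Server side: the sum of client messages is Poisson binomial, and the mean
# is recovered by inverting the linear encoding.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=1000)
total = sum(encode(x, m=64, rng=rng) for x in xs)
print((total / len(xs) - 0.5 * 64) / (0.25 * 64), xs.mean())
```

Because every message is already an integer in $\{0, \dots, m\}$, the mechanism composes naturally with modular secure aggregation, which is the point of the construction.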

Algorithms for bounding contribution for histogram estimation under user-level privacy

no code implementations 7 Jun 2022 YuHan Liu, Ananda Theertha Suresh, Wennan Zhu, Peter Kairouz, Marco Gruteser

In this scenario, the amount of noise injected into the histogram to obtain differential privacy is proportional to the maximum user contribution, which can be amplified by few outliers.
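
A minimal sketch of the underlying trade-off, assuming a simple fixed cap rather than the paper's algorithms for choosing it: clipping each user to at most $\tau$ items bounds the histogram's L1 sensitivity by $\tau$, so Laplace noise of scale $\tau/\epsilon$ per bin suffices, at the cost of bias from dropped items.

```python
import numpy as np

def private_histogram(user_items, num_bins, tau, epsilon, rng=None):
    """User-level DP histogram with contribution bounding.

    user_items: iterable of per-user lists of bin indices.
    Each user contributes at most tau items, so removing one user changes
    the histogram by at most tau in L1, and Laplace(tau/epsilon) noise
    per bin gives epsilon-DP. A sketch, not the paper's adaptive method.
    """
    rng = rng or np.random.default_rng()
    hist = np.zeros(num_bins)
    for items in user_items:
        for bin_idx in items[:tau]:   # crude bounding: keep the first tau
            hist[bin_idx] += 1
    return hist + rng.laplace(scale=tau / epsilon, size=num_bins)
```

A large $\tau$ inflates the noise; a small $\tau$ truncates heavy users, which is exactly the tension the paper's algorithms aim to balance.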

The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning

no code implementations 7 Mar 2022 Wei-Ning Chen, Christopher A. Choquette-Choo, Peter Kairouz, Ananda Theertha Suresh

We consider the problem of training a $d$ dimensional model with distributed differential privacy (DP) where secure aggregation (SecAgg) is used to ensure that the server only sees the noisy sum of $n$ model updates in every training round.

Federated Learning

Privacy-Utility Trades in Crowdsourced Signal Map Obfuscation

no code implementations 13 Jan 2022 Jiang Zhang, Lillian Clark, Matthew Clark, Konstantinos Psounis, Peter Kairouz

Cellular providers and data aggregating companies crowdsource cellular signal strength measurements from user devices to generate signal maps, which can be used to improve network performance.

Optimal Compression of Locally Differentially Private Mechanisms

no code implementations 29 Oct 2021 Abhin Shah, Wei-Ning Chen, Johannes Ballé, Peter Kairouz, Lucas Theis

Naively compressing the output of $\epsilon$-locally differentially private (LDP) randomizers leads to suboptimal utility.

The Skellam Mechanism for Differentially Private Federated Learning

1 code implementation NeurIPS 2021 Naman Agarwal, Peter Kairouz, Ziyu Liu

We introduce the multi-dimensional Skellam mechanism, a discrete differential privacy mechanism based on the difference of two independent Poisson random variables.

Federated Learning
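
Sampling the noise is simple because a Skellam variable is by definition the difference of two independent Poisson draws; a minimal sketch (the full mechanism in the paper also involves scaling, clipping, and modular secure aggregation):

```python
import numpy as np

def skellam_noise(mu: float, size, rng=None):
    """Zero-mean Skellam(mu, mu) noise: difference of two Poisson(mu) draws,
    integer-valued with variance 2 * mu."""
    rng = rng or np.random.default_rng()
    return rng.poisson(mu, size) - rng.poisson(mu, size)

# Each client perturbs its quantized (integer) update before aggregation:
update = np.array([3, -1, 4, 0])
noisy_update = update + skellam_noise(mu=8.0, size=update.shape)
```

Since the noise is integral, it survives modular secure aggregation exactly, unlike continuous Gaussian noise.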

Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning

1 code implementation 23 Aug 2021 Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage

While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.

Federated Learning Misconceptions +1

Breaking The Dimension Dependence in Sparse Distribution Estimation under Communication Constraints

no code implementations 16 Jun 2021 Wei-Ning Chen, Peter Kairouz, Ayfer Özgür

For the interactive setting, we propose a novel tree-based estimation scheme and show that the minimum sample-size needed to achieve dimension-free convergence can be further reduced to $n^*(s, d, b) = \tilde{O}\left( {s^2\log^2 d}/{2^b} \right)$.

On the Renyi Differential Privacy of the Shuffle Model

no code implementations 11 May 2021 Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Ananda Theertha Suresh, Peter Kairouz

The central question studied in this paper is Renyi Differential Privacy (RDP) guarantees for general discrete local mechanisms in the shuffle privacy model.

The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation

1 code implementation 12 Feb 2021 Peter Kairouz, Ziyu Liu, Thomas Steinke

To ensure privacy, we add on-device noise and use secure aggregation so that only the noisy sum is revealed to the server.

Federated Learning Quantization
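
A schematic of that flow, with one loud caveat: for brevity the sketch rounds a continuous Gaussian, whereas the paper deliberately uses an exact discrete Gaussian sampler because rounding changes the privacy analysis.

```python
import numpy as np

MODULUS = 2**16  # ring size assumed by secure aggregation

def client_message(update, scale, sigma, rng):
    """Quantize, add integer noise on-device, reduce into the SecAgg ring."""
    quantized = np.round(update * scale).astype(np.int64)
    # Stand-in noise: a rounded Gaussian, NOT the paper's discrete Gaussian.
    noise = np.round(rng.normal(0.0, sigma, update.shape)).astype(np.int64)
    return (quantized + noise) % MODULUS

def server_decode(messages, scale):
    """SecAgg reveals only the modular sum; re-center and de-quantize it."""
    total = np.sum(np.stack(messages), axis=0) % MODULUS
    total = np.where(total >= MODULUS // 2, total - MODULUS, total)
    return total / scale
```

The modular arithmetic is what lets the server learn the noisy sum and nothing else, assuming the true sum stays well inside the ring.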

Estimating Sparse Discrete Distributions Under Local Privacy and Communication Constraints

no code implementations 30 Oct 2020 Jayadev Acharya, Peter Kairouz, YuHan Liu, Ziteng Sun

We consider the problem of estimating sparse discrete distributions under local differential privacy (LDP) and communication constraints.

Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs

no code implementations 17 Aug 2020 Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, Ananda Theertha Suresh

We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework.

Federated Learning

Fast Dimension Independent Private AdaGrad on Publicly Estimated Subspaces

no code implementations 14 Aug 2020 Peter Kairouz, Mónica Ribero, Keith Rush, Abhradeep Thakurta

In particular, we show that if the gradients lie in a known constant rank subspace, and assuming algorithmic access to an envelope which bounds decaying sensitivity, one can achieve faster convergence to an excess empirical risk of $\tilde O(1/\epsilon n)$, where $\epsilon$ is the privacy budget and $n$ the number of samples.

Breaking the Communication-Privacy-Accuracy Trilemma

no code implementations NeurIPS 2020 Wei-Ning Chen, Peter Kairouz, Ayfer Özgür

In particular, we consider the problems of mean estimation and frequency estimation under $\varepsilon$-local differential privacy and $b$-bit communication constraints.

DP-CGAN: Differentially Private Synthetic Data and Label Generation

1 code implementation 27 Jan 2020 Reihaneh Torkzadehmahani, Peter Kairouz, Benedict Paten

Generative Adversarial Networks (GANs) are among the best-known models for generating synthetic data, including images, especially for research communities that cannot use original sensitive datasets because they are not publicly accessible.

Can You Really Backdoor Federated Learning?

no code implementations 18 Nov 2019 Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, H. Brendan McMahan

This paper focuses on backdoor attacks in the federated learning setting, where the goal of the adversary is to reduce the performance of the model on targeted tasks while maintaining good performance on the main task.

Federated Learning

Generative Models for Effective ML on Private, Decentralized Datasets

3 code implementations ICLR 2020 Sean Augenstein, H. Brendan McMahan, Daniel Ramage, Swaroop Ramaswamy, Peter Kairouz, Mingqing Chen, Rajiv Mathews, Blaise Aguera y Arcas

To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact.

Federated Learning

Theoretical Guarantees for Model Auditing with Finite Adversaries

no code implementations 8 Nov 2019 Mario Diaz, Peter Kairouz, Jiachun Liao, Lalitha Sankar

Privacy concerns have led to the development of privacy-preserving approaches for learning models from sensitive data.

Privacy Preserving

Context-Aware Local Differential Privacy

no code implementations 31 Oct 2019 Jayadev Acharya, Keith Bonawitz, Peter Kairouz, Daniel Ramage, Ziteng Sun

Local differential privacy (LDP) is a strong notion of privacy for individual users that often comes at the expense of a significant drop in utility.

Generating Fair Universal Representations using Adversarial Models

no code implementations 27 Sep 2019 Peter Kairouz, Jiachun Liao, Chong Huang, Maunil Vyas, Monica Welfert, Lalitha Sankar

We present a data-driven framework for learning fair universal representations (FUR) that guarantee statistical fairness for any learning task that may not be known a priori.

Fairness Human Activity Recognition

A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

1 code implementation 5 Jun 2019 Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar

We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification.

Classification General Classification +1
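
For concreteness, on the predicted probability $p$ of the true label the $\alpha$-loss takes the form $\ell_\alpha(p) = \frac{\alpha}{\alpha-1}\left(1 - p^{(\alpha-1)/\alpha}\right)$, with the stated special cases recovered as limits; a small sketch:

```python
import math

def alpha_loss(p: float, alpha: float) -> float:
    """alpha-loss of the predicted probability p of the true label."""
    if alpha == 1.0:                # limit: log-loss
        return -math.log(p)
    if math.isinf(alpha):           # limit: (soft) 0-1 loss
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

assert abs(alpha_loss(0.8, 0.5) - (1 / 0.8 - 1)) < 1e-12  # exponential loss
assert abs(alpha_loss(0.8, 1.0) + math.log(0.8)) < 1e-12  # log-loss
assert alpha_loss(0.8, math.inf) == 1.0 - 0.8             # 0-1 loss
```

Tuning $\alpha$ between these endpoints trades off the outlier sensitivity of the log-loss against the robustness of the 0-1 loss, which is the calibration and landscape question the paper studies.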

Generative Adversarial Models for Learning Private and Fair Representations

no code implementations ICLR 2019 Chong Huang, Xiao Chen, Peter Kairouz, Lalitha Sankar, Ram Rajagopal

We present Generative Adversarial Privacy and Fairness (GAPF), a data-driven framework for learning private and fair representations of the data.

Fairness

A Tunable Loss Function for Binary Classification

no code implementations 12 Feb 2019 Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz

We present $\alpha$-loss, $\alpha \in [1,\infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha=1$) and $0$-$1$ loss ($\alpha = \infty$).

Binary Classification Classification +2

A General Approach to Adding Differential Privacy to Iterative Training Procedures

4 code implementations 15 Dec 2018 H. Brendan McMahan, Galen Andrew, Ulfar Erlingsson, Steve Chien, Ilya Mironov, Nicolas Papernot, Peter Kairouz

In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism, and then isolates and simplifies the critical logic that computes the final privacy guarantees.
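
The canonical instance of such a modular mechanism is the clip-then-noise step used in DP-SGD; a minimal NumPy sketch under assumed hyperparameters (a real implementation, like the authors' library, also isolates the accounting that turns the noise multiplier into a final $(\epsilon, \delta)$ guarantee):

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP step: clip each example's gradient in L2, sum, add noise, average.

    per_example_grads: array of shape (batch_size, num_params).
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    return noisy_sum / len(per_example_grads)
```

Keeping clipping, noising, and accounting as separate modules is what lets the same mechanism wrap many different iterative training procedures.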

Understanding Compressive Adversarial Privacy

no code implementations 21 Sep 2018 Xiao Chen, Peter Kairouz, Ram Rajagopal

Designing a data sharing mechanism without sacrificing too much privacy can be considered as a game between data holders and malicious attackers.

Generative Adversarial Privacy

no code implementations ICLR 2019 Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, Ram Rajagopal

We present a data-driven framework called generative adversarial privacy (GAP).

Siamese Generative Adversarial Privatizer for Biometric Data

no code implementations 23 Apr 2018 Witold Oleszkiewicz, Peter Kairouz, Karol Piczak, Ram Rajagopal, Tomasz Trzcinski

Extensive evaluation on a biometric dataset of fingerprints and cartoon faces confirms the usefulness of our simple yet effective method.

Emotion Recognition

Context-Aware Generative Adversarial Privacy

no code implementations 26 Oct 2017 Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, Ram Rajagopal

On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility.

Discrete Distribution Estimation under Local Privacy

no code implementations 24 Feb 2016 Peter Kairouz, Keith Bonawitz, Daniel Ramage

The collection and analysis of user data drives improvements in the app and web ecosystems, but comes with risks to privacy.

Secure Multi-party Differential Privacy

no code implementations NeurIPS 2015 Peter Kairouz, Sewoong Oh, Pramod Viswanath

In this setting, each party is interested in computing a function on its private bit and all the other parties' bits.

Spy vs. Spy: Rumor Source Obfuscation

no code implementations 29 Dec 2014 Giulia Fanti, Peter Kairouz, Sewoong Oh, Pramod Viswanath

Whether for fear of judgment or personal endangerment, it is crucial to keep the identity of the user who initially posted a sensitive message anonymous.

The Composition Theorem for Differential Privacy

no code implementations 4 Nov 2013 Peter Kairouz, Sewoong Oh, Pramod Viswanath

Sequential querying of differentially private mechanisms degrades the overall privacy level.

Data Structures and Algorithms Cryptography and Security Information Theory
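
For context on what "degrades" means quantitatively: basic composition of $k$ mechanisms that are each $\epsilon$-DP yields $k\epsilon$-DP, while the advanced composition theorem of Dwork, Rothblum, and Vadhan gives, for any $\delta' > 0$, an overall $(\epsilon', \delta')$-DP guarantee with

$$\epsilon' = \sqrt{2k \ln(1/\delta')}\,\epsilon + k\epsilon\left(e^{\epsilon} - 1\right).$$

This paper characterizes the exact optimal composition trade-off, improving on these generic bounds.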
