Search Results for author: Jonathan Ullman

Found 44 papers, 7 papers with code

How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization

no code implementations • 17 Feb 2024 • Andrew Lowy, Jonathan Ullman, Stephen J. Wright

We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions.

Metalearning with Very Few Samples Per Task

no code implementations • 21 Dec 2023 • Maryam Aliakbarpour, Konstantina Bairaktari, Gavin Brown, Adam Smith, Nathan Srebro, Jonathan Ullman

In multitask learning, we are given a fixed set of related learning tasks and need to output one accurate model per task, whereas in metalearning we are given tasks that are drawn i.i.d.

Binary Classification

Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning

no code implementations • 5 Oct 2023 • Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman

The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training.

Data Poisoning

Smooth Lower Bounds for Differentially Private Algorithms via Padding-and-Permuting Fingerprinting Codes

no code implementations • 14 Jul 2023 • Naty Peter, Eliad Tsfadia, Jonathan Ullman

Fingerprinting arguments, first introduced by Bun, Ullman, and Vadhan (STOC 2014), are the most widely used method for establishing lower bounds on the sample complexity or error of approximately differentially private (DP) algorithms.

TMI! Finetuned Models Leak Private Information from their Pretraining Data

1 code implementation • 1 Jun 2023 • John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman

In this work we propose a new membership-inference threat model where the adversary only has access to the finetuned model and would like to infer the membership of the pretraining data.

Transfer Learning

Differentially Private Medians and Interior Points for Non-Pathological Data

no code implementations • 22 May 2023 • Maryam Aliakbarpour, Rose Silver, Thomas Steinke, Jonathan Ullman

We construct differentially private estimators with low sample complexity that estimate the median of an arbitrary distribution over $\mathbb{R}$ satisfying very mild moment conditions.

From Robustness to Privacy and Back

no code implementations • 3 Feb 2023 • Hilal Asi, Jonathan Ullman, Lydia Zakynthinou

Thus, we conclude that for any low-dimensional task, the optimal error rate for $\varepsilon$-differentially private estimators is essentially the same as the optimal error rate for estimators that are robust to adversarially corrupting $1/\varepsilon$ training samples.

A Bias-Variance-Privacy Trilemma for Statistical Estimation

no code implementations • 30 Jan 2023 • Gautam Kamath, Argyris Mouzakis, Matthew Regehr, Vikrant Singhal, Thomas Steinke, Jonathan Ullman

The canonical algorithm for differentially private mean estimation is to first clip the samples to a bounded range and then add noise to their empirical mean.
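
The clip-then-noise recipe this abstract describes can be sketched in a few lines. The one-dimensional setting, the a-priori bound R, and the use of the Laplace mechanism are illustrative choices for this sketch, not details taken from the paper:

```python
import numpy as np

def clipped_dp_mean(samples, clip_range, epsilon, rng=None):
    """Canonical clip-then-noise differentially private mean estimate.

    Clipping each sample to [-clip_range, clip_range] bounds one
    sample's influence on the empirical mean by 2*clip_range/n, so
    Laplace noise calibrated to that sensitivity yields
    epsilon-differential privacy.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.clip(np.asarray(samples, dtype=float), -clip_range, clip_range)
    n = len(x)
    sensitivity = 2.0 * clip_range / n
    return x.mean() + rng.laplace(scale=sensitivity / epsilon)
```

Note that clipping biases the estimate whenever the distribution has mass outside the clipping range, while a wider range forces more noise: exactly the bias-variance-privacy tension the paper studies.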

Multitask Learning via Shared Features: Algorithms and Hardness

no code implementations • 7 Sep 2022 • Konstantina Bairaktari, Guy Blanc, Li-Yang Tan, Jonathan Ullman, Lydia Zakynthinou

We investigate the computational efficiency of multitask learning of Boolean functions over the $d$-dimensional hypercube that are related by means of a feature representation of size $k \ll d$ shared across all tasks.

Computational Efficiency

SNAP: Efficient Extraction of Private Properties with Poisoning

1 code implementation • 25 Aug 2022 • Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman

Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model.

Inference Attack

How to Combine Membership-Inference Attacks on Multiple Updated Models

2 code implementations • 12 May 2022 • Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan Ullman, Roxana Geambasu

Our results on four public datasets show that using update information gives the adversary a significant advantage over attacks on standalone models, and also over a prior MI attack that exploits model updates in a related machine-unlearning setting.

Machine Unlearning

A Private and Computationally-Efficient Estimator for Unbounded Gaussians

no code implementations • 8 Nov 2021 • Gautam Kamath, Argyris Mouzakis, Vikrant Singhal, Thomas Steinke, Jonathan Ullman

We give the first polynomial-time, polynomial-sample, differentially private estimator for the mean and covariance of an arbitrary Gaussian distribution $\mathcal{N}(\mu,\Sigma)$ in $\mathbb{R}^d$.

Covariance-Aware Private Mean Estimation Without Private Covariance Estimation

no code implementations • NeurIPS 2021 • Gavin Brown, Marco Gaboardi, Adam Smith, Jonathan Ullman, Lydia Zakynthinou

Each of our estimators is based on a simple, general approach to designing differentially private mechanisms, but with novel technical steps to make the estimator private and sample-efficient.

Leveraging Public Data for Practical Private Query Release

1 code implementation • 17 Feb 2021 • Terrance Liu, Giuseppe Vietri, Thomas Steinke, Jonathan Ullman, Zhiwei Steven Wu

In many statistical problems, incorporating priors can significantly improve performance.

Fair and Optimal Cohort Selection for Linear Utilities

no code implementations • 15 Feb 2021 • Konstantina Bairaktari, Huy Le Nguyen, Jonathan Ullman

The rise of algorithmic decision-making has created an explosion of research around the fairness of those algorithms.

Decision Making, Fairness

The Limits of Pan Privacy and Shuffle Privacy for Learning and Estimation

no code implementations • 17 Sep 2020 • Albert Cheu, Jonathan Ullman

There has been a recent wave of interest in intermediate trust models for differential privacy that eliminate the need for a fully trusted central data collector, but overcome the limitations of local differential privacy.

Fair and Useful Cohort Selection

no code implementations • 4 Sep 2020 • Konstantina Bairaktari, Paul Langton, Huy L. Nguyen, Niklas Smedemark-Margulies, Jonathan Ullman

A challenge in fair algorithm design is that, while there are compelling notions of individual fairness, these notions typically do not satisfy desirable composition properties, and downstream applications based on fair classifiers might not preserve fairness.

Fairness

Auditing Differentially Private Machine Learning: How Private is Private SGD?

1 code implementation • NeurIPS 2020 • Matthew Jagielski, Jonathan Ullman, Alina Oprea

We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.

CoinPress: Practical Private Mean and Covariance Estimation

3 code implementations • NeurIPS 2020 • Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman

We present simple differentially private estimators for the mean and covariance of multivariate sub-Gaussian data that are accurate at small sample sizes.

A Primer on Private Statistics

no code implementations • 30 Apr 2020 • Gautam Kamath, Jonathan Ullman

Differentially private statistical estimation has seen a flurry of developments over the last several years.

Private Query Release Assisted by Public Data

no code implementations • ICML 2020 • Raef Bassily, Albert Cheu, Shay Moran, Aleksandar Nikolov, Jonathan Ullman, Zhiwei Steven Wu

In comparison, with only private samples, this problem cannot be solved even for simple query classes with VC-dimension one, and without any private samples, a larger public sample of size $d/\alpha^2$ is needed.

Private Mean Estimation of Heavy-Tailed Distributions

no code implementations • 21 Feb 2020 • Gautam Kamath, Vikrant Singhal, Jonathan Ullman

We give new upper and lower bounds on the minimax sample complexity of differentially private mean estimation of distributions with bounded $k$-th moments.

The Power of Factorization Mechanisms in Local and Central Differential Privacy

no code implementations • 19 Nov 2019 • Alexander Edmonds, Aleksandar Nikolov, Jonathan Ullman

We give new characterizations of the sample complexity of answering linear queries (statistical queries) in the local and central models of differential privacy: in the non-interactive local model, we give the first approximate characterization of the sample complexity.

Differentially Private Algorithms for Learning Mixtures of Separated Gaussians

no code implementations • NeurIPS 2019 • Gautam Kamath, Or Sheffet, Vikrant Singhal, Jonathan Ullman

Learning the parameters of Gaussian mixture models is a fundamental and widely studied problem with numerous applications.

Private Identity Testing for High-Dimensional Distributions

no code implementations • NeurIPS 2020 • Clément L. Canonne, Gautam Kamath, Audra McMillan, Jonathan Ullman, Lydia Zakynthinou

In this work we present novel differentially private identity (goodness-of-fit) testers for natural and widely studied classes of multivariate product distributions: Gaussians in $\mathbb{R}^d$ with known covariance and product distributions over $\{\pm 1\}^{d}$.

Efficiently Estimating Erdos-Renyi Graphs with Node Differential Privacy

no code implementations • NeurIPS 2019 • Adam Sealfon, Jonathan Ullman

We give a simple, computationally efficient, and node-differentially-private algorithm for estimating the parameter of an Erdos-Renyi graph, that is, estimating p in a G(n, p) model, with near-optimal accuracy.
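
For context on what node differential privacy demands here, a naive baseline (not the paper's near-optimal algorithm) is easy to sketch: rewiring one node changes at most n-1 of the n(n-1)/2 edge slots, so the empirical edge density has node sensitivity (n-1)/(n(n-1)/2) = 2/n, and Laplace noise of scale 2/(n·ε) gives ε-node-DP:

```python
import numpy as np

def naive_node_dp_p(adj, epsilon, rng=None):
    """Naive node-DP estimate of p for a G(n, p) graph.

    adj: symmetric 0/1 adjacency matrix. The edge density has node
    sensitivity (n-1)/(n*(n-1)/2) = 2/n, so the Laplace mechanism
    with scale (2/n)/epsilon is epsilon-node-DP.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = adj.shape[0]
    num_edges = np.triu(adj, k=1).sum()   # count each edge once
    density = num_edges / (n * (n - 1) / 2)
    return density + rng.laplace(scale=(2.0 / n) / epsilon)
```

Since the noise scale shrinks as 2/(nε), this baseline is already consistent as n grows; the point of the paper is to do substantially better than it.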

Efficient Private Algorithms for Learning Large-Margin Halfspaces

no code implementations • 24 Feb 2019 • Huy L. Nguyen, Jonathan Ullman, Lydia Zakynthinou

We present new differentially private algorithms for learning a large-margin halfspace.

Differentially Private Fair Learning

no code implementations • 6 Dec 2018 • Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman

This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.

Fairness

The Structure of Optimal Private Tests for Simple Hypotheses

no code implementations • 27 Nov 2018 • Clément L. Canonne, Gautam Kamath, Audra McMillan, Adam Smith, Jonathan Ullman

Specifically, we characterize this sample complexity up to constant factors in terms of the structure of $P$ and $Q$ and the privacy level $\varepsilon$, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test.

Change Point Detection, Generalization Bounds +2

The Limits of Post-Selection Generalization

no code implementations • NeurIPS 2018 • Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, Jonathan Ullman

While statistics and machine learning offer numerous methods for ensuring generalization, these methods often fail in the presence of adaptivity: the common practice in which the choice of analysis depends on previous interactions with the same dataset.

Privately Learning High-Dimensional Distributions

no code implementations • 1 May 2018 • Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan Ullman

We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian and learning a product distribution over the Boolean hypercube in total variation distance.

Local Differential Privacy for Evolving Data

no code implementations • NeurIPS 2018 • Matthew Joseph, Aaron Roth, Jonathan Ullman, Bo Waggoner

Moreover, existing techniques to mitigate this effect do not apply in the "local model" of differential privacy that these systems use.

Tight Lower Bounds for Locally Differentially Private Selection

no code implementations • 7 Feb 2018 • Jonathan Ullman

We prove a tight lower bound (up to constant factors) on the sample complexity of any non-interactive local differentially private protocol for optimizing a linear function over the simplex.

PAC learning

Skyline Identification in Multi-Armed Bandits

no code implementations • 12 Nov 2017 • Albert Cheu, Ravi Sundaram, Jonathan Ullman

There is an ordered set of $n$ arms $A[1],\dots, A[n]$, each with some stochastic reward drawn from some unknown bounded distribution.

Multi-Armed Bandits

PSI (Ψ): a Private data Sharing Interface

3 code implementations • 14 Sep 2016 • Marco Gaboardi, James Honaker, Gary King, Jack Murtagh, Kobbi Nissim, Jonathan Ullman, Salil Vadhan

We provide an overview of PSI ("a Private data Sharing Interface"), a system we are developing to enable researchers in the social sciences and other fields to share and explore privacy-sensitive datasets with the strong privacy protections of differential privacy.

Cryptography and Security, Computers and Society, Methodology

Multidimensional Dynamic Pricing for Welfare Maximization

no code implementations • 19 Jul 2016 • Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu

We are able to apply this technique to the setting of unit demand buyers despite the fact that in that setting the goods are not divisible, and the natural fractional relaxation of a unit demand valuation is not strongly concave.

Make Up Your Mind: The Price of Online Queries in Differential Privacy

no code implementations • 15 Apr 2016 • Mark Bun, Thomas Steinke, Jonathan Ullman

The queries may be chosen adversarially from a larger set Q of allowable queries in one of three ways, which we list in order from easiest to hardest to answer: Offline: The queries are chosen all at once and the differentially private mechanism answers the queries in a single batch.

Algorithmic Stability for Adaptive Data Analysis

no code implementations • 8 Nov 2015 • Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, Jonathan Ullman

Specifically, suppose there is an unknown distribution $\mathbf{P}$ and a set of $n$ independent samples $\mathbf{x}$ is drawn from $\mathbf{P}$.

Watch and Learn: Optimizing from Revealed Preferences Feedback

no code implementations • 4 Apr 2015 • Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu

In this paper we present an approach to solving for the leader's optimal strategy in certain Stackelberg games where the follower's utility function (and thus the subsequent best response of the follower) is unknown.

More General Queries and Less Generalization Error in Adaptive Data Analysis

no code implementations • 16 Mar 2015 • Raef Bassily, Adam Smith, Thomas Steinke, Jonathan Ullman

However, generalization error is typically bounded in a non-adaptive model, where all questions are specified before the dataset is drawn.

Between Pure and Approximate Differential Privacy

no code implementations • 24 Jan 2015 • Thomas Steinke, Jonathan Ullman

The novelty of our bound is that it depends optimally on the parameter $\delta$, which loosely corresponds to the probability that the algorithm fails to be private, and is the first to smoothly interpolate between approximate differential privacy ($\delta > 0$) and pure differential privacy ($\delta = 0$).

Interactive Fingerprinting Codes and the Hardness of Preventing False Discovery

no code implementations • 5 Oct 2014 • Thomas Steinke, Jonathan Ullman

We show an essentially tight bound on the number of adaptively chosen statistical queries that a computationally efficient algorithm can answer accurately given $n$ samples from an unknown distribution.

Preventing False Discovery in Interactive Data Analysis is Hard

no code implementations • 6 Aug 2014 • Moritz Hardt, Jonathan Ullman

In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.

Privately Solving Linear Programs

no code implementations • 15 Feb 2014 • Justin Hsu, Aaron Roth, Tim Roughgarden, Jonathan Ullman

In this paper, we initiate the systematic study of solving linear programs under differential privacy.
