Search Results for author: Pritish Kamath

Found 21 papers, 1 paper with code

Differentially Private Optimization with Sparse Gradients

no code implementations • 16 Apr 2024 • Badih Ghazi, Cristóbal Guzmán, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

Motivated by applications of large embedding models, we study differentially private (DP) optimization problems under sparsity of individual gradients.

How Private is DP-SGD?

no code implementations • 26 Mar 2024 • Lynn Chua, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Amer Sinha, Chiyuan Zhang

We demonstrate a substantial gap between the privacy guarantees of the Adaptive Batch Linear Queries (ABLQ) mechanism under different types of batch sampling: (i) Shuffling, and (ii) Poisson subsampling; the typical analysis of Differentially Private Stochastic Gradient Descent (DP-SGD) follows by interpreting it as a post-processing of ABLQ.
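For intuition, the sketch below contrasts the two batch-sampling schemes the gap is about; it is illustrative only, and the function names and parameters are not taken from the paper. Poisson subsampling includes each example independently at every step, while shuffling cuts one random permutation into fixed-size batches.

```python
import numpy as np

# Illustrative contrast of the two sampling schemes; not the paper's code.
def poisson_batches(n, q, steps, rng):
    """Each step includes every example independently with probability q."""
    return [np.flatnonzero(rng.random(n) < q) for _ in range(steps)]

def shuffled_batches(n, batch_size, rng):
    """One random permutation of the data, cut into fixed-size batches."""
    perm = rng.permutation(n)
    return [perm[i:i + batch_size] for i in range(0, n, batch_size)]

rng = np.random.default_rng(0)
print(len(poisson_batches(1000, 0.1, steps=5, rng=rng)[0]))  # ~100, varies per step
print(len(shuffled_batches(1000, 100, rng)[0]))              # exactly 100
```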

Training Differentially Private Ad Prediction Models with Semi-Sensitive Features

no code implementations • 26 Jan 2024 • Lynn Chua, Qiliang Cui, Badih Ghazi, Charlie Harrison, Pritish Kamath, Walid Krichene, Ravi Kumar, Pasin Manurangsi, Krishna Giri Narra, Amer Sinha, Avinash Varadarajan, Chiyuan Zhang

Motivated by problems arising in digital advertising, we introduce the task of training differentially private (DP) machine learning models with semi-sensitive features.

Ticketed Learning-Unlearning Schemes

no code implementations • 27 Jun 2023 • Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari, Chiyuan Zhang

Subsequently, given any subset of examples that wish to be unlearnt, the goal is to learn, without the knowledge of the original training dataset, a good predictor that is identical to the predictor that would have been produced when learning from scratch on the surviving examples.

On User-Level Private Convex Optimization

no code implementations • 8 May 2023 • Badih Ghazi, Pritish Kamath, Ravi Kumar, Raghu Meka, Pasin Manurangsi, Chiyuan Zhang

We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.

Regression with Label Differential Privacy

no code implementations • 12 Dec 2022 • Badih Ghazi, Pritish Kamath, Ravi Kumar, Ethan Leeman, Pasin Manurangsi, Avinash V Varadarajan, Chiyuan Zhang

We study the task of training regression models with the guarantee of label differential privacy (DP).

regression

Private Ad Modeling with DP-SGD

no code implementations • 21 Nov 2022 • Carson Denison, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Krishna Giri Narra, Amer Sinha, Avinash V Varadarajan, Chiyuan Zhang

A well-known algorithm in privacy-preserving ML is differentially private stochastic gradient descent (DP-SGD).
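As background, here is a minimal sketch of one DP-SGD update, which clips each per-example gradient and adds Gaussian noise calibrated to the clipping norm; this is not the paper's implementation, and the hyperparameter names are placeholders.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=np.random.default_rng(0)):
    """One DP-SGD update: clip per-example gradients, average, add Gaussian noise."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean
```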

Privacy Preserving

Anonymized Histograms in Intermediate Privacy Models

no code implementations • 27 Oct 2022 • Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

We study the problem of privately computing the anonymized histogram.

Private Isotonic Regression

no code implementations • 27 Oct 2022 • Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

For the most general problem of isotonic regression over a partially ordered set (poset) $\mathcal{X}$ and for any Lipschitz loss function, we obtain a pure-DP algorithm that, given $n$ input points, has an expected excess empirical risk of roughly $\mathrm{width}(\mathcal{X}) \cdot \log|\mathcal{X}| / n$, where $\mathrm{width}(\mathcal{X})$ is the width of the poset.

regression

Faster Privacy Accounting via Evolving Discretization

no code implementations • 10 Jul 2022 • Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

We introduce a new algorithm for numerical composition of privacy random variables, useful for computing the accurate differential privacy parameters for composition of mechanisms.
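For context, composing independent mechanisms adds their privacy-loss random variables, so once each privacy loss distribution is discretized on a common grid, numerical composition reduces to convolving probability mass functions. The sketch below shows only that basic step under assumed variable names; it is not the paper's evolving-discretization algorithm.

```python
import numpy as np

def compose_plds(pmf_a, pmf_b):
    """PMF of the sum of two independent privacy losses discretized on grids with
    the same spacing (the grid origin of the result is the sum of the origins)."""
    return np.convolve(pmf_a, pmf_b)

def delta_from_pld(losses, pmf, eps):
    """Tight delta(eps) from a discretized PLD: E[(1 - exp(eps - L))_+]."""
    return float(np.sum(pmf * np.maximum(0.0, 1.0 - np.exp(eps - losses))))
```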

Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions

no code implementations • 10 Jul 2022 • Vadym Doroshenko, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

The privacy loss distribution (PLD) provides a tight characterization of the privacy loss of a mechanism in the context of differential privacy (DP).
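As background (standard definitions, not the paper's new discretization scheme): the privacy loss of a mechanism $M$ at outcome $o$, for neighboring datasets $D, D'$, is $\mathcal{L}(o) = \log\frac{\Pr[M(D)=o]}{\Pr[M(D')=o]}$, the PLD is the distribution of $\mathcal{L}(o)$ when $o \sim M(D)$, and the tight $(\varepsilon, \delta)$-DP guarantee can be read off as $\delta(\varepsilon) = \mathbb{E}_{L \sim \mathrm{PLD}}\left[\left(1 - e^{\varepsilon - L}\right)_+\right]$.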

Do More Negative Samples Necessarily Hurt in Contrastive Learning?

no code implementations • 3 May 2022 • Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath

Recent investigations in noise contrastive estimation suggest, both empirically as well as theoretically, that while having more "negative samples" in the contrastive loss improves downstream classification performance initially, beyond a threshold, it hurts downstream performance due to a "collision-coverage" trade-off.
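To make the "number of negative samples" knob concrete, here is a minimal InfoNCE-style sketch; the loss form and names are an assumption for illustration, not necessarily the exact objective analyzed in the paper.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=1.0):
    """-log softmax score of the positive against k negatives (rows of `negatives`);
    increasing k is the 'more negative samples' knob discussed above."""
    pos = anchor @ positive / temperature
    logits = np.concatenate([[pos], negatives @ anchor / temperature])
    return -pos + np.log(np.sum(np.exp(logits)))
```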

Contrastive Learning

On the Power of Differentiable Learning versus PAC and SQ Learning

no code implementations • NeurIPS 2021 • Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro

With fine enough precision relative to minibatch size, namely when $b \rho$ is small enough, SGD can go beyond SQ learning and simulate any sample-based learning algorithm and thus its learning power is equivalent to that of PAC learning; this extends prior work that achieved this result for $b=1$.

PAC learning

Supervised Bayesian Specification Inference from Demonstrations

no code implementations • 6 Jul 2021 • Ankit Shah, Pritish Kamath, Shen Li, Patrick Craven, Kevin Landers, Kevin Oden, Julie Shah

When observing task demonstrations, human apprentices are able to identify whether a given task is executed correctly long before they gain expertise in actually performing that task.

Probabilistic Programming

Understanding the Eluder Dimension

no code implementations • 14 Apr 2021 • Gene Li, Pritish Kamath, Dylan J. Foster, Nathan Srebro

We provide new insights on eluder dimension, a complexity measure that has been extensively used to bound the regret of algorithms for online bandits and reinforcement learning with function approximation.
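For context, the standard definition (paraphrased from Russo and Van Roy, 2013; not the paper's new results): a point $x$ is $\varepsilon$-dependent on $x_1, \dots, x_n$ with respect to a class $\mathcal{F}$ if every pair $f, f' \in \mathcal{F}$ with $\sqrt{\sum_{i} (f(x_i) - f'(x_i))^2} \le \varepsilon$ also satisfies $|f(x) - f'(x)| \le \varepsilon$; the $\varepsilon$-eluder dimension is, roughly, the length of the longest sequence in which every element is $\varepsilon$-independent of its predecessors.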

Active Learning

Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels

no code implementations • 1 Mar 2021 • Eran Malach, Pritish Kamath, Emmanuel Abbe, Nathan Srebro

Complementing this, we show that without these conditions, gradient descent can in fact learn with small error even when no kernel method, in particular using the tangent kernel, can achieve a non-trivial advantage over random guessing.

Does Invariant Risk Minimization Capture Invariance?

no code implementations • 4 Jan 2021 • Pritish Kamath, Akilesh Tangella, Danica J. Sutherland, Nathan Srebro

We show that the Invariant Risk Minimization (IRM) formulation of Arjovsky et al. (2019) can fail to capture "natural" invariances, at least when used in its practical "linear" form, and even on very simple problems which directly follow the motivating examples for IRM.
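For context, the IRM formulation of Arjovsky et al. (2019) that the result refers to, paraphrased: learn a representation $\Phi$ and a classifier $w$ minimizing the total risk $\sum_{e} R^e(w \circ \Phi)$ over training environments $e$, subject to $w$ being simultaneously optimal for every environment, i.e. $w \in \arg\min_{\bar{w}} R^e(\bar{w} \circ \Phi)$ for all $e$; the practical "linear" form mentioned in the snippet restricts the classifier on top of $\Phi$ to be linear.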

Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity

no code implementations • 9 Mar 2020 • Pritish Kamath, Omar Montasser, Nathan Srebro

We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to approximate, rather than exactly represent, a given hypothesis class.

Bayesian Inference of Temporal Task Specifications from Demonstrations

no code implementations • NeurIPS 2018 • Ankit Shah, Pritish Kamath, Julie A. Shah, Shen Li

When observing task demonstrations, human apprentices are able to identify whether a given task is executed correctly long before they gain expertise in actually performing that task.

Probabilistic Programming

The Optimality of Correlated Sampling

1 code implementation • 4 Dec 2016 • Mohammad Bavarian, Badih Ghazi, Elad Haramaty, Pritish Kamath, Ronald L. Rivest, Madhu Sudan

In this note, we give a surprisingly simple proof that this protocol is in fact tight.
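For context, a minimal sketch of the classical shared-randomness correlated-sampling protocol whose tightness the note establishes (names are placeholders): two parties holding distributions $P$ and $Q$ read the same shared stream of (index, threshold) pairs and each outputs the first index whose threshold falls below its own probability for that index; they disagree with probability at most $2\delta/(1+\delta)$, where $\delta$ is the total variation distance between $P$ and $Q$.

```python
import numpy as np

def correlated_sample(p, shared_stream):
    """Output the first shared (index, threshold) pair accepted under p."""
    for idx, thr in shared_stream:
        if thr < p[idx]:
            return idx
    return None  # essentially never reached for a long enough stream

rng = np.random.default_rng(1)
shared = [(int(rng.integers(3)), float(rng.random())) for _ in range(10_000)]
P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.4, 0.4, 0.2])
print(correlated_sample(P, shared), correlated_sample(Q, shared))  # usually equal
```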

Computational Complexity Information Theory
