Search Results for author: Toniann Pitassi

Found 18 papers, 6 papers with code

Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models

1 code implementation • 22 Nov 2023 • Thomas P. Zollo, Todd Morrill, Zhun Deng, Jake C. Snell, Toniann Pitassi, Richard Zemel

The recent explosion in the capabilities of large language models has led to a wave of interest in how best to prompt a model to perform a given task.

Code Generation

Distribution-Free Statistical Dispersion Control for Societal Applications

no code implementations • NeurIPS 2023 • Zhun Deng, Thomas P. Zollo, Jake C. Snell, Toniann Pitassi, Richard Zemel

Explicit finite-sample statistical guarantees on model performance are an important ingredient in responsible machine learning.

Stability is Stable: Connections between Replicability, Privacy, and Adaptive Generalization

no code implementations • 22 Mar 2023 • Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, Satchit Sivakumar, Jessica Sorrell

In particular, we give sample-efficient algorithmic reductions between perfect generalization, approximate differential privacy, and replicability for a broad class of statistical problems.

PAC learning

Quantile Risk Control: A Flexible Framework for Bounding the Probability of High-Loss Predictions

1 code implementation • 27 Dec 2022 • Jake C. Snell, Thomas P. Zollo, Zhun Deng, Toniann Pitassi, Richard Zemel

In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor.
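To make the idea of bounding a loss quantile concrete, here is a minimal, generic sketch: a distribution-free upper bound on the $\beta$-quantile of the loss via the one-sided Dvoretzky–Kiefer–Wolfowitz inequality. This is an illustrative classical bound, not the tighter family of bounds developed in the paper; the function name and interface are invented for this sketch.

```python
import numpy as np

def quantile_upper_bound(losses, beta, delta):
    """Distribution-free upper bound on the beta-quantile of the loss
    distribution, valid with probability >= 1 - delta over an i.i.d.
    sample, via the one-sided DKW inequality.  Illustrative only --
    not the framework from the paper.
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    n = len(losses)
    # One-sided DKW: sup_x (F(x) - F_n(x)) <= eps w.p. >= 1 - exp(-2 n eps^2)
    eps = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    level = beta + eps
    if level >= 1.0:
        return np.inf  # sample too small to certify this quantile
    # Smallest order statistic whose empirical CDF reaches beta + eps:
    # F_n(losses[k-1]) = k/n >= beta + eps implies F(losses[k-1]) >= beta.
    k = int(np.ceil(level * n))
    return losses[k - 1]
```

With $n$ losses and confidence $1-\delta$, the returned order statistic upper-bounds the true $\beta$-quantile; when $\beta + \varepsilon \geq 1$ the sample cannot certify the quantile and the bound is vacuous.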

Learning versus Refutation in Noninteractive Local Differential Privacy

no code implementations • 26 Oct 2022 • Alexander Edmonds, Aleksandar Nikolov, Toniann Pitassi

We study two basic statistical tasks in non-interactive local differential privacy (LDP): learning and refutation.

PAC learning
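As background for the non-interactive LDP setting, here is a sketch of the classic randomized-response primitive, in which each user locally noises a private bit before sending it. This is a standard textbook mechanism used only to illustrate local privacy; it is not one of the protocols analyzed in the paper, and the function names are invented for this sketch.

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Classic randomized response: report the private bit truthfully
    with probability e^eps / (1 + e^eps), which satisfies eps-local DP.
    """
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return bit if rng.random() < p else 1 - bit

def estimate_mean(reports, epsilon):
    """Unbiased estimate of the true mean of the private bits:
    E[report] = (1 - p) + mean * (2p - 1), so invert that affine map."""
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)
```

The server never sees raw bits, yet the debiased aggregate converges to the true mean as the number of users grows.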

Reproducibility in Learning

no code implementations • 20 Jan 2022 • Russell Impagliazzo, Rex Lei, Toniann Pitassi, Jessica Sorrell

We introduce the notion of a reproducible algorithm in the context of learning.

On the Power and Limitations of Branch and Cut

no code implementations • 9 Feb 2021 • Noah Fleming, Mika Göös, Russell Impagliazzo, Toniann Pitassi, Robert Robere, Li-Yang Tan, Avi Wigderson

In a recent (and surprising) result, Dadush and Tiwari showed that these short refutations of the Tseitin formulas could be translated into quasi-polynomial size and depth Cutting Planes proofs, refuting a long-standing conjecture.

Computational Complexity

Size and Depth Separation in Approximating Benign Functions with Neural Networks

no code implementations • 30 Jan 2021 • Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir

We show a complexity-theoretic barrier to proving such results beyond size $O(d\log^2(d))$, but we also show an explicit benign function that can be approximated with networks of size $O(d)$ but not with networks of size $o(d/\log d)$.

Theoretical bounds on estimation error for meta-learning

no code implementations • ICLR 2021 • James Lucas, Mengye Ren, Irene Kameni, Toniann Pitassi, Richard Zemel

Machine learning models have traditionally been developed under the assumption that the training and test distributions match exactly.

Few-Shot Learning

Causal Modeling for Fairness in Dynamical Systems

1 code implementation • ICML 2020 • Elliot Creager, David Madras, Toniann Pitassi, Richard Zemel

In many application areas (lending, education, and online recommenders, for example), fairness and equity concerns emerge when a machine learning system interacts with a dynamically changing environment to produce both immediate and long-term effects for individuals and demographic groups.

Fairness

Flexibly Fair Representation Learning by Disentanglement

no code implementations • 6 Jun 2019 • Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard Zemel

We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes.

Disentanglement, Fairness +1

Fairness Through Causal Awareness: Learning Latent-Variable Models for Biased Data

no code implementations • 7 Sep 2018 • David Madras, Elliot Creager, Toniann Pitassi, Richard Zemel

Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders.

Attribute, Fairness +1

Predict Responsibly: Increasing Fairness by Learning to Defer

no code implementations • ICLR 2018 • David Madras, Toniann Pitassi, Richard Zemel

When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly.

Decision Making, Fairness

Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer

1 code implementation • NeurIPS 2018 • David Madras, Toniann Pitassi, Richard Zemel

We propose a learning algorithm which accounts for potential biases held by external decision-makers in a system.

Decision Making, Fairness
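To make the predict-or-defer setup concrete, here is a toy deferral rule: output the model's prediction when its confidence clears a threshold, otherwise pass the case to the external decision-maker. This is a simplified confidence-threshold heuristic for illustration only, not the learned deferral rule from the paper, and all names here are invented for the sketch.

```python
import numpy as np

def defer_predictions(model_probs, dm_preds, threshold):
    """Toy predict-or-defer rule: use the model's argmax prediction
    when its top-class probability >= threshold, otherwise defer to
    the external decision-maker's prediction for that example.
    """
    confidences = np.max(model_probs, axis=1)   # top-class probability
    model_preds = np.argmax(model_probs, axis=1)
    return np.where(confidences >= threshold, model_preds, dm_preds)
```

In the paper's framing the deferral decision is itself learned jointly with the classifier, accounting for the decision-maker's biases; the fixed threshold above is only the simplest instance of the idea.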

Preserving Statistical Validity in Adaptive Data Analysis

no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth

We show that, surprisingly, there is a way to estimate an exponential in $n$ number of expectations accurately even if the functions are chosen adaptively.

Two-sample testing
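A key mechanism behind such guarantees is answering adaptively chosen statistical queries with calibrated noise, since the stability conferred by the noise transfers in-sample accuracy to the underlying distribution. Below is a minimal illustrative sketch of that idea with Gaussian noise; the class name, interface, and noise scale are placeholders invented here, not the paper's calibrated mechanism.

```python
import numpy as np

class NoisyQueryAnswerer:
    """Illustrative sketch: answer adaptively chosen statistical queries
    (empirical means of [0,1]-valued functions) with added Gaussian
    noise.  The differential-privacy-style stability of the noise is
    what prevents an adaptive analyst from overfitting the sample.
    """
    def __init__(self, data, sigma):
        self.data = np.asarray(data)
        self.sigma = sigma  # placeholder noise scale, not calibrated

    def query(self, f):
        # f maps a data point into [0, 1]; return a noised empirical mean
        vals = np.clip([f(x) for x in self.data], 0.0, 1.0)
        return float(np.mean(vals) + np.random.normal(0.0, self.sigma))
```

Each query sees only a noisy empirical mean, so even queries chosen as a function of earlier answers remain close to their true population expectations.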

Solving #SAT and Bayesian Inference with Backtracking Search

no code implementations • 15 Jan 2014 • Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi

Furthermore, backtracking's ability to utilize more flexible variable orderings allows us to prove that it can achieve an exponential speedup over other standard algorithms for SUMPROD on some instances.

Bayesian Inference
