1 code implementation • 22 Nov 2023 • Thomas P. Zollo, Todd Morrill, Zhun Deng, Jake C. Snell, Toniann Pitassi, Richard Zemel
The recent explosion in the capabilities of large language models has led to a wave of interest in how best to prompt a model to perform a given task.
no code implementations • NeurIPS 2023 • Zhun Deng, Thomas P. Zollo, Jake C. Snell, Toniann Pitassi, Richard Zemel
Explicit finite-sample statistical guarantees on model performance are an important ingredient in responsible machine learning.
no code implementations • 22 Mar 2023 • Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, Satchit Sivakumar, Jessica Sorrell
In particular, we give sample-efficient algorithmic reductions between perfect generalization, approximate differential privacy, and replicability for a broad class of statistical problems.
1 code implementation • 27 Dec 2022 • Jake C. Snell, Thomas P. Zollo, Zhun Deng, Toniann Pitassi, Richard Zemel
In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor.
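A generic illustration of the idea of bounding a quantile of the loss distribution with a finite-sample, distribution-free guarantee is an order-statistic bound: the k-th smallest observed loss upper-bounds the q-quantile with confidence 1 - delta once the Binomial(n, q) upper tail at k drops below delta. This is a minimal sketch of that standard construction, not the paper's actual bound family.

```python
import math

def quantile_upper_bound(losses, q, delta):
    """Distribution-free (1 - delta)-confidence upper bound on the
    q-quantile of the loss distribution via an order statistic.
    A generic sketch, not the bound family proposed in the paper."""
    n = len(losses)
    s = sorted(losses)
    # Smallest k (1-indexed) with P[Binomial(n, q) >= k] <= delta:
    # the k-th order statistic then exceeds the q-quantile with
    # probability at least 1 - delta over the i.i.d. sample.
    for k in range(1, n + 1):
        tail = sum(math.comb(n, j) * q**j * (1 - q)**(n - j)
                   for j in range(k, n + 1))
        if tail <= delta:
            return s[k - 1]
    return float("inf")  # too few samples for a finite bound
```

With n = 200 losses and q = 0.5, the 95%-confidence bound lands a few order statistics above the empirical median, and tightens as delta grows.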
no code implementations • 26 Oct 2022 • Alexander Edmonds, Aleksandar Nikolov, Toniann Pitassi
We study two basic statistical tasks in non-interactive local differential privacy (LDP): learning and refutation.
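The basic primitive behind non-interactive LDP protocols is one-round randomized response: each user locally flips their bit with a probability calibrated to epsilon, and the server debiases the aggregate. A minimal sketch (the classic mechanism, not this paper's constructions):

```python
import math
import random

def randomized_response(bit, epsilon=1.0):
    """Classic one-bit local DP mechanism: report the true bit with
    probability e^eps / (e^eps + 1), otherwise report its flip."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else 1 - bit

def debias(reports, epsilon):
    """Unbiased estimate of the true mean of the bits, correcting for
    the known flip probability of randomized response."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    m = sum(reports) / len(reports)
    return (m - (1 - p)) / (2 * p - 1)
```

Each report is epsilon-LDP on its own, and the server never sees raw bits; accuracy of the debiased mean degrades as epsilon shrinks.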
no code implementations • 20 Jan 2022 • Russell Impagliazzo, Rex Lei, Toniann Pitassi, Jessica Sorrell
We introduce the notion of a reproducible algorithm in the context of learning.
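One standard trick for making an estimator reproducible in this sense is randomized rounding with shared internal randomness: snap the empirical mean to a random grid, so that two runs on fresh samples from the same distribution usually emit the identical output. This is a hypothetical illustration of that trick, not the paper's definition or algorithms.

```python
import random

def replicable_mean(samples, alpha=0.1, seed=0):
    """Sketch of randomized rounding with shared randomness: the seed
    plays the role of the algorithm's shared internal coins, so two
    runs with the same seed but different samples from the same
    distribution usually return the identical rounded estimate."""
    rng = random.Random(seed)       # shared internal randomness
    offset = rng.uniform(0, alpha)  # random offset for the grid
    m = sum(samples) / len(samples)
    # Snap the empirical mean to the nearest point of the offset grid.
    return offset + alpha * round((m - offset) / alpha)
```

Two empirical means that differ by much less than alpha round to the same grid point unless a grid boundary happens to fall between them, which is unlikely over the shared randomness.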
no code implementations • 9 Feb 2021 • Noah Fleming, Mika Göös, Russell Impagliazzo, Toniann Pitassi, Robert Robere, Li-Yang Tan, Avi Wigderson
In a recent (and surprising) result, Dadush and Tiwari showed that these short refutations of the Tseitin formulas could be translated into quasi-polynomial size and depth Cutting Planes proofs, refuting a long-standing conjecture.
no code implementations • 30 Jan 2021 • Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir
We show a complexity-theoretic barrier to proving such results beyond size $O(d\log^2(d))$, but also show an explicit benign function that can be approximated with networks of size $O(d)$ and not with networks of size $o(d/\log d)$.
no code implementations • ICLR 2021 • James Lucas, Mengye Ren, Irene Kameni, Toniann Pitassi, Richard Zemel
Machine learning models have traditionally been developed under the assumption that the training and test distributions match exactly.
1 code implementation • ICML 2020 • Elliot Creager, David Madras, Toniann Pitassi, Richard Zemel
In many application areas (lending, education, and online recommenders, for example), fairness and equity concerns emerge when a machine learning system interacts with a dynamically changing environment to produce both immediate and long-term effects for individuals and demographic groups.
no code implementations • 6 Jun 2019 • Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard Zemel
We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes.
no code implementations • 7 Sep 2018 • David Madras, Elliot Creager, Toniann Pitassi, Richard Zemel
Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders.
7 code implementations • ICML 2018 • David Madras, Elliot Creager, Toniann Pitassi, Richard Zemel
In this paper, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream.
no code implementations • ICLR 2018 • David Madras, Toniann Pitassi, Richard Zemel
When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly.
1 code implementation • NeurIPS 2018 • David Madras, Toniann Pitassi, Richard Zemel
We propose a learning algorithm which accounts for potential biases held by external decision-makers in a system.
1 code implementation • NeurIPS 2015 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We also formalize and address the general problem of data reuse in adaptive data analysis.
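The mechanism this line of work is known for, Thresholdout, answers an adaptively chosen query from the training set whenever the training and holdout estimates agree, and otherwise releases a noised holdout estimate, so the holdout leaks information only when it must. A simplified sketch, with Gaussian noise standing in for the Laplace noise of the actual mechanism and illustrative default parameters:

```python
import random

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01):
    """Simplified sketch of the Thresholdout idea: answer a query with
    the training estimate unless it disagrees with the holdout, in
    which case release a noised holdout estimate.  Gaussian noise
    stands in for the paper's Laplace noise; parameters are
    illustrative, not the calibrated values."""
    t = sum(train_vals) / len(train_vals)      # training-set estimate
    h = sum(holdout_vals) / len(holdout_vals)  # holdout estimate
    eta = random.gauss(0.0, 2 * sigma)         # noisy comparison margin
    if abs(t - h) > threshold + eta:
        return h + random.gauss(0.0, sigma)    # holdout used: add noise
    return t                                   # agreement: answer for free
```

Because agreement costs nothing, the holdout can be reused across many adaptive queries before its guarantees degrade.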
no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We show that, surprisingly, it is possible to accurately estimate a number of expectations that is exponential in $n$, even when the functions are chosen adaptively.
no code implementations • 15 Jan 2014 • Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi
Furthermore, backtracking's ability to utilize more flexible variable orderings allows us to prove that it can achieve an exponential speedup over other standard algorithms for SUMPROD on some instances.