no code implementations • 29 Dec 2023 • Benjamin Eyre, Elliot Creager, David Madras, Vardan Papyan, Richard Zemel
Designing deep neural network classifiers that perform robustly on distributions differing from the available training data is an active area of machine learning research.
no code implementations • 29 Dec 2023 • Elliot Creager, Richard Zemel
Research on algorithmic recourse typically considers how an individual can reasonably change an unfavorable automated decision when interacting with a fixed decision-making system.
no code implementations • 19 Dec 2023 • Elliot Creager
Here we observe that, insofar as standard training methods tend to learn spurious features, this propensity can be leveraged to search for partitions of the training data that expose the resulting inconsistency, ultimately promoting learning algorithms that are invariant to spurious features.
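A minimal sketch of that idea, assuming a PyTorch setup and a fixed reference classifier trained by standard ERM (the function names and the two-group parametrization are illustrative, not the paper's exact procedure): learn soft per-example weights that split the training data into groups on which the reference classifier is maximally non-invariant.

    import torch
    import torch.nn.functional as F

    def weighted_irm_penalty(logits, y, w):
        # Gradient of a weighted risk w.r.t. a dummy scale on the logits;
        # a large gradient signals the fixed classifier is not invariant
        # on the subgroup picked out by the weights w.
        scale = torch.ones(1, requires_grad=True)
        loss = (w * F.binary_cross_entropy_with_logits(
            logits * scale, y, reduction="none")).mean()
        grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
        return (grad ** 2).sum()

    def infer_partition(logits, y, steps=500, lr=0.01):
        # logits: detached outputs of a reference classifier trained with ERM.
        q = torch.zeros(len(y), requires_grad=True)  # soft assignment to 2 groups
        opt = torch.optim.Adam([q], lr=lr)
        for _ in range(steps):
            p = torch.sigmoid(q)
            penalty = weighted_irm_penalty(logits, y, p) \
                    + weighted_irm_penalty(logits, y, 1 - p)
            opt.zero_grad()
            (-penalty).backward()  # maximize the invariance violation
            opt.step()
        return (torch.sigmoid(q) > 0.5).detach()  # hard partition of the data

The inferred partition can then stand in for hand-specified environment labels in an invariant-learning or group-robust training objective.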
no code implementations • 8 Dec 2023 • Parand A. Alamdari, Toryn Q. Klassen, Elliot Creager, Sheila A. McIlraith
In this paper we investigate the notion of fairness in the context of sequential decision making where multiple stakeholders can be affected by the outcomes of decisions.
no code implementations • ICCV 2023 • Arjun Mani, Ishaan Preetam Chandratreya, Elliot Creager, Carl Vondrick, Richard Zemel
Modeling the mechanics of fluid in complex scenes is vital to applications in design, graphics, and robotics.
1 code implementation • 20 Oct 2022 • Silviu Pitis, Elliot Creager, Ajay Mandlekar, Animesh Garg
To this end, we show that (1) known local structure in the environment transitions is sufficient for an exponential reduction in the sample complexity of training a dynamics model, and (2) a locally factored dynamics model provably generalizes out-of-distribution to unseen states and actions.
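A toy illustration of what a locally factored dynamics model can look like (the two-subprocess structure, class name, and layer sizes below are assumptions made for this sketch, not the paper's implementation): each next-state component is predicted only from the inputs it locally depends on, so the model cannot fit correlations with unrelated components that hold only in the training distribution.

    import torch
    import torch.nn as nn

    class FactoredDynamics(nn.Module):
        def __init__(self, parents, hidden=32):
            # parents[i] lists the indices of the (state, action) inputs that
            # component i of the next state is allowed to depend on.
            super().__init__()
            self.parents = parents
            self.heads = nn.ModuleList(
                nn.Sequential(nn.Linear(len(p), hidden), nn.ReLU(), nn.Linear(hidden, 1))
                for p in parents
            )

        def forward(self, sa):  # sa: (batch, state_dim + action_dim)
            outs = [head(sa[:, p]) for head, p in zip(self.heads, self.parents)]
            return torch.cat(outs, dim=-1)

    # Two non-interacting subprocesses: component 0 depends on inputs (0, 2),
    # component 1 on inputs (1, 3).
    model = FactoredDynamics(parents=[[0, 2], [1, 3]])
    next_state = model(torch.randn(8, 4))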
1 code implementation • 12 Nov 2020 • Robert Adragna, Elliot Creager, David Madras, Richard Zemel
Robustness is of central importance in machine learning and has given rise to the fields of domain generalization and invariant learning, which are concerned with improving performance on a test distribution distinct from but related to the training distribution.
1 code implementation • 14 Oct 2020 • Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness.
no code implementations • 28 Sep 2020 • Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel
Developing learning approaches that are not overly sensitive to the training distribution is central to research on domain- or out-of-distribution generalization, robust optimization and fairness.
no code implementations • ICML 2020 • Martin Mladenov, Elliot Creager, Omer Ben-Porat, Kevin Swersky, Richard Zemel, Craig Boutilier
We develop several scalable techniques to solve the matching problem and draw connections to various notions of user regret and fairness, arguing that the resulting outcomes are fairer in a utilitarian sense.
1 code implementation • NeurIPS 2020 • Silviu Pitis, Elliot Creager, Animesh Garg
Many dynamic processes, including common scenarios in robotic control and reinforcement learning (RL), involve a set of interacting subprocesses.
2 code implementations • 14 Jun 2020 • Frederik Träuble, Elliot Creager, Niki Kilbertus, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer
The focus of disentanglement approaches has been on identifying independent factors of variation in data.
1 code implementation • ICML 2020 • Elliot Creager, David Madras, Toniann Pitassi, Richard Zemel
In many application areas, such as lending, education, and online recommenders, fairness and equity concerns emerge when a machine learning system interacts with a dynamically changing environment to produce both immediate and long-term effects for individuals and demographic groups.
no code implementations • 6 Jun 2019 • Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard Zemel
We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes.
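As one concrete (illustrative) way to make subgroup fairness over multiple sensitive attributes measurable, a worst-case demographic-parity gap can be computed over all intersectional subgroups; the helper below is hypothetical and not code from the paper.

    import itertools
    import numpy as np

    def subgroup_positive_rate_gap(y_pred, attrs):
        # y_pred: binary predictions, shape (n,)
        # attrs: binary sensitive attributes, shape (n, k)
        rates = []
        for combo in itertools.product([0, 1], repeat=attrs.shape[1]):
            mask = np.all(attrs == np.array(combo), axis=1)
            if mask.any():
                rates.append(y_pred[mask].mean())
        # Worst-case gap in positive prediction rate across intersectional subgroups.
        return max(rates) - min(rates)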
no code implementations • 7 Sep 2018 • David Madras, Elliot Creager, Toniann Pitassi, Richard Zemel
Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders.
1 code implementation • ICLR 2019 • Chun-Hao Chang, Elliot Creager, Anna Goldenberg, David Duvenaud
We can rephrase the question of why an image classifier made its decision to ask: which parts of the image, if they were not seen by the classifier, would most change its decision?
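A crude sketch of that counterfactual question, assuming a PyTorch image classifier (the paper removes regions with generative infilling; here simple blurring stands in, and the patching scheme and names are illustrative): score each region by how much the classifier's confidence in its original prediction drops when that region is replaced.

    import torch
    import torch.nn.functional as F

    def occlusion_saliency(model, image, patch=16):
        # image: (1, C, H, W) with H and W multiples of patch; model returns class logits.
        model.eval()
        with torch.no_grad():
            probs = F.softmax(model(image), dim=-1)
            cls = probs.argmax(dim=-1).item()
            base = probs[0, cls].item()
            _, _, H, W = image.shape
            saliency = torch.zeros(H // patch, W // patch)
            blurred = F.avg_pool2d(image, kernel_size=9, stride=1, padding=4)  # crude "removal"
            for i in range(0, H, patch):
                for j in range(0, W, patch):
                    x = image.clone()
                    x[:, :, i:i + patch, j:j + patch] = blurred[:, :, i:i + patch, j:j + patch]
                    drop = base - F.softmax(model(x), dim=-1)[0, cls].item()
                    saliency[i // patch, j // patch] = drop
        return saliency  # large values: regions whose removal most changes the decision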
7 code implementations • ICML 2018 • David Madras, Elliot Creager, Toniann Pitassi, Richard Zemel
In this paper, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream.
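One common instantiation of that idea is adversarial representation learning, sketched below under assumed toy dimensions and names: an encoder is trained so that a task classifier succeeds while an adversary trying to recover the sensitive attribute from the representation fails.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 8))
    classifier = nn.Linear(8, 1)   # predicts the task label y from the representation
    adversary = nn.Linear(8, 1)    # tries to predict the sensitive attribute a

    opt_main = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
    opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    gamma = 1.0  # trade-off between task accuracy and fairness

    def training_step(x, y, a):
        # x: (batch, 20); y, a: (batch, 1) float tensors in {0, 1}.
        z = encoder(x)
        # Adversary step: get better at recovering a from the representation.
        adv_loss = F.binary_cross_entropy_with_logits(adversary(z.detach()), a)
        opt_adv.zero_grad()
        adv_loss.backward()
        opt_adv.step()
        # Encoder/classifier step: predict y well while fooling the adversary.
        task_loss = F.binary_cross_entropy_with_logits(classifier(z), y)
        fool_loss = -F.binary_cross_entropy_with_logits(adversary(z), a)
        main_loss = task_loss + gamma * fool_loss
        opt_main.zero_grad()
        main_loss.backward()
        opt_main.step()
        return task_loss.item(), adv_loss.item()

The gamma coefficient trades off task accuracy against how little information about the sensitive attribute survives in the learned representation.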