no code implementations • 8 Apr 2024 • Tejas Kasetty, Divyat Mahajan, Gintare Karolina Dziugaite, Alexandre Drouin, Dhanya Sridhar
Numerous decision-making tasks require estimating causal effects under interventions on different parts of a system.
1 code implementation • 26 Nov 2022 • Sébastien Lachapelle, Tristan Deleu, Divyat Mahajan, Ioannis Mitliagkas, Yoshua Bengio, Simon Lacoste-Julien, Quentin Bertrand
Although disentangled representations are often said to be beneficial for downstream tasks, current empirical and theoretical understanding is limited.
1 code implementation • 3 Nov 2022 • Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis
We study the problem of model selection in causal inference, specifically for conditional average treatment effect (CATE) estimation.
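The entry above concerns selecting among CATE estimators. As context, a minimal sketch of one standard CATE estimator (a T-learner on synthetic data) is below; the data, models, and setup are illustrative assumptions, not the paper's benchmark or its proposed selection criterion:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: covariates X, binary treatment T, outcome Y.
# The true CATE here is X[:, 0] by construction.
n = 2000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)
Y = X[:, 0] * T + X[:, 1] + rng.normal(scale=0.1, size=n)

# T-learner: fit a separate outcome model per treatment arm.
m1 = RandomForestRegressor(random_state=0).fit(X[T == 1], Y[T == 1])
m0 = RandomForestRegressor(random_state=0).fit(X[T == 0], Y[T == 0])

# Estimated CATE is the difference of the two regressions.
cate_hat = m1.predict(X) - m0.predict(X)

# On synthetic data the true CATE is known, so we can score directly;
# model selection is hard precisely because this oracle is unavailable
# on real data.
mse = np.mean((cate_hat - X[:, 0]) ** 2)
print(round(float(mse), 3))
```

The oracle MSE computed at the end is exactly what is unobservable in practice, which is why surrogate validation criteria for CATE model selection are needed.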
1 code implementation • 24 Sep 2022 • Kartik Ahuja, Divyat Mahajan, Yixin Wang, Yoshua Bengio
Can interventional data facilitate causal representation learning?
1 code implementation • 10 Apr 2022 • Kartik Ahuja, Divyat Mahajan, Vasilis Syrgkanis, Ioannis Mitliagkas
In this work, we depart from these assumptions and ask: a) How can we achieve disentanglement when the auxiliary information does not provide conditional independence over the factors of variation?
1 code implementation • 7 Oct 2021 • Divyat Mahajan, Shruti Tople, Amit Sharma
Through extensive evaluation on a synthetic dataset and image datasets like MNIST, Fashion-MNIST, and Chest X-rays, we show that a lower OOD generalization gap does not imply better robustness to MI attacks.
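The entry above measures robustness to membership inference (MI) attacks. A minimal sketch of a loss-based MI signal on an overfit classifier follows; the dataset, model, and thresholding scheme are illustrative assumptions, not the paper's attack or evaluation protocol:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy task with label noise so the model cannot generalize perfectly.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fully-grown random forest: tends to memorize its training set.
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def per_sample_loss(model, X, y):
    """Cross-entropy loss of each example under the model."""
    p = model.predict_proba(X)
    return -np.log(np.clip(p[np.arange(len(y)), y], 1e-12, None))

loss_in = per_sample_loss(clf, X_tr, y_tr)    # members
loss_out = per_sample_loss(clf, X_te, y_te)   # non-members

# MI signal: members incur lower loss than non-members, so an
# attacker thresholding on loss can distinguish the two groups.
gap = loss_out.mean() - loss_in.mean()
print(gap > 0)
```

A larger train/test loss gap gives the attacker a stronger signal, which is why generalization behavior and MI robustness are often studied together.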
no code implementations • 11 Nov 2020 • Yanbo Xu, Divyat Mahajan, Liz Manrao, Amit Sharma, Emre Kiciman
For many kinds of interventions, such as a new advertisement, marketing intervention, or feature recommendation, it is important to target a specific subset of people in order to maximize benefits while minimizing cost and potential harm.

2 code implementations • 10 Nov 2020 • Ramaravind Kommiya Mothilal, Divyat Mahajan, Chenhao Tan, Amit Sharma
In addition, by restricting the features that can be modified for generating counterfactual examples, we find that the top-k features from LIME or SHAP are often neither necessary nor sufficient explanations of a model's prediction.
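The entry above refers to generating counterfactual examples while restricting which features may be modified. A minimal sketch of that idea with a greedy search on a toy classifier is below; the model, search procedure, and step size are illustrative assumptions, not the method proposed in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy classifier: class 1 when feature 0 + feature 1 is large.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, allowed, step=0.1, max_iter=200):
    """Greedily perturb only the features in `allowed` until the
    model's prediction for x flips; return None on failure."""
    x = x.copy()
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    for _ in range(max_iter):
        if clf.predict(x.reshape(1, -1))[0] == target:
            return x
        # Try each allowed feature in both directions and keep the
        # candidate with the highest target-class probability.
        best, best_p = None, -1.0
        for j in allowed:
            for d in (step, -step):
                cand = x.copy()
                cand[j] += d
                p = clf.predict_proba(cand.reshape(1, -1))[0, target]
                if p > best_p:
                    best, best_p = cand, p
        x = best
    return None

x0 = np.array([-1.0, -1.0, 0.0])      # predicted class 0
cf = counterfactual(x0, allowed=[0])   # only feature 0 may change
print(cf)
```

Restricting `allowed` encodes which features are actionable; if no counterfactual exists within the allowed set, the search fails, which is one way to probe whether a feature subset is sufficient to change the prediction.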
1 code implementation • arXiv 2020 • Divyat Mahajan, Shruti Tople, Amit Sharma
In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label.
Ranked #1 on Domain Generalization on Rotated Fashion-MNIST
3 code implementations • 6 Dec 2019 • Divyat Mahajan, Chenhao Tan, Amit Sharma
For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful for an end-user only to the extent that perturbation of feature inputs is feasible in the real world.
1 code implementation • 7 Jun 2019 • Varun Khare, Divyat Mahajan, Homanga Bharadhwaj, Vinay Verma, Piyush Rai
Our approach is based on end-to-end learning of the class-conditional distributions of both seen and unseen classes.
Ranked #1 on Zero-Shot Learning on CUB-200 - 0-Shot Learning (using extra training data)