3 code implementations • 27 Nov 2022 • Juan L. Gamella, Armeen Taeb, Christina Heinze-Deml, Peter Bühlmann
We leverage this procedure and evaluate the performance of GnIES on synthetic, real, and semi-synthetic data sets.
1 code implementation • 29 Sep 2020 • Christina Heinze-Deml, Diane Bouchacourt
Unlike humans, who can recombine familiar expressions to create novel ones, modern neural networks struggle to do so.
2 code implementations • NeurIPS 2020 • Juan L. Gamella, Christina Heinze-Deml
A fundamental difficulty of causal learning is that causal models generally cannot be fully identified from observational data alone.
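A minimal linear-Gaussian toy example (my own illustration, not from the paper) makes this identifiability limit concrete: the models X → Y and Y → X can induce the exact same observational distribution, so no amount of observational data can tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Forward model: X -> Y, with Y = 2*X + noise.
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 1.0, n)

# Reverse model: Y -> X, with parameters chosen to match the same
# joint Gaussian: Var(Y) = 5, Cov(X, Y) = 2, so
# X = (2/5)*Y + noise with Var = 1 - 4/5 = 1/5.
y_rev = rng.normal(0.0, np.sqrt(5.0), n)
x_rev = 0.4 * y_rev + rng.normal(0.0, np.sqrt(0.2), n)

cov_fwd = np.cov(x, y)
cov_rev = np.cov(x_rev, y_rev)

# Both empirical covariance matrices are close to [[1, 2], [2, 5]]:
# observational data alone cannot distinguish X -> Y from Y -> X.
print(np.round(cov_fwd, 2))
print(np.round(cov_rev, 2))
```

Interventional data breaks this symmetry: intervening on X changes the distribution of Y only under X → Y, which is why the paper studies learning from a mix of observational and interventional samples.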
no code implementations • NeurIPS 2019 • Fanny Yang, Zuowen Wang, Christina Heinze-Deml
This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy for worst-case spatial transformations (spatial robustness).
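One common form of invariance-inducing regularizer (a generic sketch, not the paper's specific formulation) penalizes the worst-case change in the model's output over a finite set of spatial transformations; the penalty is added to the task loss with a tunable weight.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(img, w):
    # A toy linear "network": score = <w, flattened image>.
    return float(img.ravel() @ w)

def invariance_penalty(img, w, transforms):
    # Worst-case (max) change in output over a set of spatial
    # transformations -- here, 90-degree rotations.  In training one
    # would minimize task_loss + lam * invariance_penalty.
    base = model(img, w)
    return max((model(t(img), w) - base) ** 2 for t in transforms)

# Bind k at definition time (k=k) to avoid the late-binding lambda pitfall.
transforms = [lambda im, k=k: np.rot90(im, k) for k in (1, 2, 3)]

img = rng.normal(size=(8, 8))
w = rng.normal(size=64)
print(invariance_penalty(img, w, transforms))  # > 0 for generic weights
```

A rotation-invariant predictor (e.g. constant weights, which just sum the pixels) incurs essentially zero penalty, so the regularizer only pushes on the non-invariant part of the model.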
no code implementations • ICLR 2018 • Christina Heinze-Deml, Nicolai Meinshausen
If two or more samples share the same class and identifier, (Y, ID) = (y, i), then we treat those samples as counterfactuals of each other under different interventions on the orthogonal, or style, features.
1 code implementation • 31 Oct 2017 • Christina Heinze-Deml, Nicolai Meinshausen
Our goal is to minimize a loss that is robust under changes in the distribution of these style features.
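One way to realize this robustness, in the spirit of the grouping idea above (a simplified sketch, assuming a hypothetical `conditional_variance_penalty` helper rather than the paper's exact estimator), is to penalize the variance of the model's predictions within each (class, identifier) group: samples in a group differ only in style, so a style-robust model should predict the same value for all of them.

```python
import numpy as np

def conditional_variance_penalty(preds, y, ids):
    # Group predictions by (class, identifier).  Samples in the same
    # group are counterfactuals differing only in style features, so
    # within-group prediction variance measures style sensitivity.
    groups = {}
    for p, key in zip(preds, zip(y, ids)):
        groups.setdefault(key, []).append(p)
    # Singleton groups carry no variance information and are skipped.
    variances = [np.var(v) for v in groups.values() if len(v) > 1]
    return float(np.mean(variances)) if variances else 0.0

# Two samples of class 1 with identifier 5, predicted 0.0 and 2.0:
# within-group variance is 1.0, so the penalty flags style sensitivity.
print(conditional_variance_penalty([0.0, 2.0], y=[1, 1], ids=[5, 5]))
```

In training, this penalty would be added to the usual empirical loss with a regularization weight, trading average accuracy for robustness to shifts in the style distribution.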
no code implementations • 28 Jun 2017 • Christina Heinze-Deml, Marloes H. Maathuis, Nicolai Meinshausen
Causal models can be viewed as a special class of graphical models that not only represent the distribution of the observed system but also the distributions under external interventions.
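The distinction between observing and intervening can be seen in a small simulation (my own illustration under a hidden-confounder assumption): conditioning on X = x and setting X := x by intervention give different distributions for Y when a confounder is present.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# SCM with a hidden confounder H: H -> X and H -> Y, and no edge X -> Y.
h = rng.normal(size=n)
x = h + 0.1 * rng.normal(size=n)
y = h + 0.1 * rng.normal(size=n)

# Observational: conditioning on X near 2 selects large H, dragging Y up.
mask = np.abs(x - 2.0) < 0.1
obs_mean = y[mask].mean()

# Interventional do(X := 2): X is set externally, cutting the edge
# H -> X; Y's mechanism is unchanged, so Y keeps its mean of 0.
y_do = h + 0.1 * rng.normal(size=n)
int_mean = y_do.mean()

print(round(obs_mean, 2), round(int_mean, 2))  # roughly 1.98 vs 0.0
```

A purely observational graphical model represents only the first quantity; a causal model additionally predicts the second, which is what makes it a strictly richer object.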
Methodology
1 code implementation • 26 Jun 2017 • Christina Heinze-Deml, Jonas Peters, Nicolai Meinshausen
In this work, we present and evaluate an array of methods for nonlinear and nonparametric versions of ICP for learning the causal parents of given target variables.
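The core ICP idea can be sketched as follows (a toy stand-in, not the paper's methods: a cubic polynomial fit and a t-test substitute for the nonlinear regressions and invariance tests actually evaluated): regress Y on a candidate parent set pooled across environments and test whether the residual distribution is invariant across environments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def invariance_pvalue(x, y, env):
    # Pooled nonlinear regression of Y on the candidate parent
    # (here: a cubic polynomial), then a test of whether residual
    # means differ across environments.  Invariant residuals support
    # the candidate containing the true causal parents of Y.
    coef = np.polyfit(x, y, deg=3)
    resid = y - np.polyval(coef, x)
    return stats.ttest_ind(resid[env == 0], resid[env == 1]).pvalue

n = 5000
env = np.repeat([0, 1], n)

x1 = np.where(env == 1, 2.0, 0.0) + rng.normal(size=2 * n)  # intervened cause
y = 0.1 * x1**3 + x1 + rng.normal(size=2 * n)               # Y = f(X1) + noise
x2 = y + np.where(env == 1, 3.0, 0.0) + rng.normal(size=2 * n)  # shifted child

p_cause = invariance_pvalue(x1, y, env)  # residuals invariant: large p
p_child = invariance_pvalue(x2, y, env)  # residuals shift with env: tiny p
print(p_cause, p_child)
```

Candidate sets whose invariance test is not rejected across all environments are retained; intersecting the retained sets yields a (conservative) estimate of the causal parents.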
Methodology
no code implementations • 1 Mar 2017 • Christina Heinze-Deml, Brian McWilliams, Nicolai Meinshausen
Privacy is crucial in many applications of machine learning.