Towards causality-aware predictions in static machine learning tasks: the linear structural causal model case

12 Jan 2020 · Elias Chaibub Neto

While counterfactual thinking has been used in ML tasks that aim to predict the consequences of different actions, policies, and interventions, it has not yet been leveraged in more traditional/static supervised learning tasks, such as the prediction of discrete labels in classification tasks or continuous responses in regression problems. Here, we propose a counterfactual approach to train "causality-aware" predictive models that are able to leverage causal information in static ML tasks. In applications plagued by confounding, the approach can be used to generate predictions that are free from the influence of observed confounders. In applications involving observed mediators, the approach can be used to generate predictions that only capture the direct or the indirect causal influences. The ability to quantify how much of the predictive performance of a learner is actually due to the causal relations of interest is important to improve the explainability of ML systems. Mechanistically, we train and evaluate supervised ML algorithms on (counterfactually) simulated data which retains only the associations generated by the causal relations of interest. In this paper we focus on linear models, where analytical results connecting covariances to causal effects are readily available. Quite importantly, we show that our approach does not require knowledge of the full causal graph. It suffices to know which variables represent potential confounders and/or mediators, and whether the features have a causal influence on the response (or vice-versa).
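As a concrete illustration of the linear confounding case described above, the sketch below simulates a linear SCM in which an observed confounder C drives both the feature X and the response Y, then removes the confounder's contribution by residualizing both variables on C before fitting. This is a minimal sketch of one way such an adjustment can look, not the paper's exact procedure; the coefficients and noise terms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative linear SCM (coefficients are assumptions, not from the paper):
# C -> X and C -> Y, with a direct causal effect of X on Y equal to 1.
C = rng.normal(size=n)
X = 2.0 * C + rng.normal(size=n)
Y = 1.0 * X + 3.0 * C + rng.normal(size=n)

# Naive OLS slope of Y on X is biased by the backdoor path X <- C -> Y.
beta_naive = (X @ Y) / (X @ X)

# Confounder adjustment in the linear case: remove C's contribution from
# both X and Y (residualize on C), then regress the residuals.
X_res = X - C * (C @ X) / (C @ C)
Y_res = Y - C * (C @ Y) / (C @ C)
beta_adjusted = (X_res @ Y_res) / (X_res @ X_res)

print(round(beta_naive, 2), round(beta_adjusted, 2))
```

In this setup the naive slope concentrates around 2.2 (inflated by the confounder) while the adjusted slope recovers the direct causal effect of 1, mirroring the paper's point that a model trained on the adjusted data captures only the causal relation of interest.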

