Model-Based Counterfactual Explanations Incorporating Feature Space Attributes for Tabular Data

20 Apr 2024 · Yuta Sumiya, Hayaru Shouno

Machine-learning models, which accurately predict patterns from large datasets, are crucial in decision making. Consequently, counterfactual explanations, which explain predictions by suggesting input perturbations, have become prominent. These perturbations often indicate how the input should be altered to change the prediction, leading to actionable recommendations. However, current techniques require solving an optimization problem for each input to be changed, making them computationally expensive. In addition, traditional encoding methods inadequately handle perturbations of categorical variables in tabular data. This study therefore proposes FastDCFlow, an efficient counterfactual explanation method based on normalizing flows. The proposed method captures complex data distributions, learns meaningful latent spaces that preserve proximity, and improves predictions. For categorical variables, we employ TargetEncoding, which respects ordinal relationships and incorporates perturbation costs. The proposed method outperformed existing methods on multiple metrics, striking a balance between the trade-offs inherent in counterfactual explanations. The source code is available at: https://github.com/sumugit/FastDCFlow.
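
To illustrate the encoding step the abstract mentions, below is a minimal sketch of target encoding for a categorical column. The DataFrame, column names ("job", "income"), and smoothing constant are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Minimal sketch of target encoding (assumed setup): a pandas DataFrame with a
# categorical feature "job" and a binary label "income".
import pandas as pd

df = pd.DataFrame({
    "job":    ["clerical", "managerial", "clerical", "sales", "managerial"],
    "income": [0, 1, 0, 0, 1],
})

# Replace each category with a smoothed mean of the target, so categories
# acquire a numeric ordering that reflects their association with the label.
global_mean = df["income"].mean()
stats = df.groupby("job")["income"].agg(["mean", "count"])
alpha = 1.0  # smoothing strength toward the global mean (assumed value)
encoding = (stats["count"] * stats["mean"] + alpha * global_mean) / (stats["count"] + alpha)

df["job_te"] = df["job"].map(encoding)
print(df[["job", "job_te", "income"]])
```

Because the encoded values are ordered by their association with the target, a small numeric perturbation of the encoded feature corresponds to a semantically small change in the category, which appears to be the perturbation-cost property the abstract refers to.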
