no code implementations • 13 Jul 2023 • Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni
Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations.
1 code implementation • 26 May 2023 • Dan Ley, Saumitra Mishra, Daniele Magazzeni
Counterfactual explanations have been widely studied in explainability, with a range of application dependent methods prominent in fairness, recourse and model understanding.
1 code implementation • 19 May 2023 • Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly.
no code implementations • 9 Feb 2023 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
This composition can be represented in the form of a tree.
no code implementations • 16 Oct 2022 • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?".
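As a generic illustration of the question above (not the method proposed in this paper), the sketch below searches for a counterfactual for a toy linear classifier by moving the input along the weight direction until the predicted label flips; the model, weights, and step size are all hypothetical choices for the example.

```python
import math

def predict(x, w=(-1.0, 2.0), b=0.5):
    # toy linear classifier: label 1 if w.x + b > 0, else 0
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def counterfactual(x, target, step=0.01, max_iter=10000, w=(-1.0, 2.0), b=0.5):
    # nudge x along the (normalised) weight direction, the fastest way to
    # change the score of a linear model, until the prediction flips
    norm = math.sqrt(sum(wi * wi for wi in w))
    direction = [wi / norm for wi in w]
    sign = 1.0 if target == 1 else -1.0
    cf = list(x)
    for _ in range(max_iter):
        if predict(cf, w, b) == target:
            return cf
        cf = [ci + sign * step * di for ci, di in zip(cf, direction)]
    return None  # no counterfactual found within the iteration budget
```

Real counterfactual generators additionally trade off proximity, sparsity, and plausibility; this sketch only captures the core "smallest perturbation that changes the label" idea.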
no code implementations • 6 Jul 2022 • Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni
In this work, we propose a novel strategy -- that we call RobX -- to generate robust counterfactuals for tree-based ensembles, e.g., XGBoost.
no code implementations • 14 Apr 2022 • Dan Ley, Saumitra Mishra, Daniele Magazzeni
Counterfactual explanations have been widely studied in explainability, with a range of application dependent methods emerging in fairness, recourse and model understanding.
no code implementations • 30 Oct 2021 • Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
There exist several methods that aim to address the crucial task of understanding the behaviour of AI/ML models.
no code implementations • 29 Sep 2021 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions, which is called a metamodel (a model of the model).
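A minimal sketch of the metamodel idea (a generic illustration, not this paper's algorithm): query the black box on sampled inputs, then fit the simplest interpretable rule -- here a single-feature threshold stump -- that best reproduces its predictions. The black-box function, sampling range, and surrogate family are all assumptions made for the example.

```python
import random

def black_box(x):
    # stand-in for an opaque model: some nonlinear decision rule
    return 1 if x[0] * x[0] + x[1] > 1.0 else 0

def fit_stump_metamodel(model, n_samples=400, seed=0):
    # global surrogate: choose the single-feature threshold rule that
    # agrees with the black box on the largest fraction of sampled inputs
    rng = random.Random(seed)
    X = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(n_samples)]
    y = [model(x) for x in X]
    best = None
    for f in range(2):                       # candidate feature
        for t in sorted({x[f] for x in X}):  # candidate threshold
            for pos in (0, 1):               # which side predicts class 1
                preds = [pos if x[f] > t else 1 - pos for x in X]
                fidelity = sum(p == yi for p, yi in zip(preds, y)) / len(y)
                if best is None or fidelity > best[0]:
                    best = (fidelity, f, t, pos)
    return best  # (fidelity, feature index, threshold, class above threshold)
```

The returned fidelity quantifies how faithfully the interpretable stump mimics the black box; richer metamodel classes (shallow trees, sparse linear models) trade interpretability for fidelity.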
1 code implementation • 15 May 2020 • Saumitra Mishra, Emmanouil Benetos, Bob L. Sturm, Simon Dixon
One way to analyse the behaviour of machine learning models is through local explanations that highlight input features that maximally influence model predictions.
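As a simple illustration of this kind of local explanation (a generic sensitivity analysis, not the method studied in the paper), the sketch below nudges each input feature of a hypothetical scoring function and reports how much the output moves -- features with larger attributions influence the prediction more.

```python
def local_attributions(model, x, eps=1e-4):
    # finite-difference sensitivity: per-feature change in model output
    # when that feature is perturbed slightly
    base = model(x)
    attrs = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        attrs.append((model(xp) - base) / eps)
    return attrs

def toy_model(x):
    # hypothetical scoring function used only for this example
    return 3.0 * x[0] - 0.5 * x[1] + 0.0 * x[2]
```

For the linear `toy_model`, the attributions recover its coefficients exactly; for nonlinear models they give a local, point-specific picture, which is precisely why such explanations must be interpreted near the input in question.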
no code implementations • 21 Apr 2019 • Saumitra Mishra, Daniel Stoller, Emmanouil Benetos, Bob L. Sturm, Simon Dixon
However, this requires a careful selection of hyper-parameters to generate interpretable examples for each neuron of interest, and current methods rely on a manual, qualitative evaluation of each setting, which is prohibitively slow.