Search Results for author: Hinda Haned

Found 6 papers, 4 papers with code

To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions

1 code implementation · 14 Apr 2021 · Kim de Bie, Ana Lucic, Hinda Haned

In hybrid human-AI systems, users need to decide whether or not to trust an algorithmic prediction while the true error in the prediction is unknown.

regression
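
The excerpt above only states the problem; the paper's actual trust estimator is not shown in this listing. As a minimal, hypothetical sketch of the general idea, one could proxy the trustworthiness of a regression prediction by how close the query point lies to the training data. The synthetic data, model choice, and `trust_score` helper below are all illustrative assumptions, not the authors' method:

```python
# Hypothetical sketch: score trustworthiness of a regression prediction by
# how close the query point lies to the training data (illustration only,
# not the method proposed in the paper).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = X_train @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
nn = NearestNeighbors(n_neighbors=10).fit(X_train)

def trust_score(x):
    """Higher when x resembles the training data (assumed proxy for reliability)."""
    dist, _ = nn.kneighbors(x.reshape(1, -1))
    return 1.0 / (1.0 + dist.mean())

x_query = rng.normal(size=4)
print("prediction:", model.predict(x_query.reshape(1, -1))[0])
print("trust score:", trust_score(x_query))
```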

FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles

1 code implementation · 27 Nov 2019 · Ana Lucic, Harrie Oosterhuis, Hinda Haned, Maarten de Rijke

Model interpretability has become an important problem in machine learning (ML) due to the increased effect that algorithmic decisions have on humans.

counterfactual
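
The FOCUS title points to an optimization-based way of generating counterfactual explanations for tree ensembles; the details are not reproduced in this listing. The sketch below is only a generic perturbation-based stand-in that shows what a counterfactual explanation is, with the synthetic data, classifier, and `find_counterfactual` helper all assumed for illustration:

```python
# Generic counterfactual search by random perturbation (illustration only,
# not the FOCUS algorithm): find a nearby point the ensemble classifies
# as the desired target class.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

def find_counterfactual(x, target=1, n_samples=5000, scale=0.5):
    """Return the closest perturbed point that the ensemble assigns to `target`."""
    candidates = x + rng.normal(scale=scale, size=(n_samples, x.size))
    flipped = candidates[clf.predict(candidates) == target]
    if flipped.size == 0:
        return None
    return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]

x = np.array([-1.0, -1.0, 0.0])   # currently predicted as class 0
print("counterfactual:", find_counterfactual(x, target=1))
```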

Why Does My Model Fail? Contrastive Local Explanations for Retail Forecasting

1 code implementation · 17 Jul 2019 · Ana Lucic, Hinda Haned, Maarten de Rijke

Given a large error, MC-BRP determines (1) feature values that would result in a reasonable prediction, and (2) general trends between each feature and the target, both based on Monte Carlo simulations.
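
Going by the description above, a stripped-down version of the idea can be sketched with plain Monte Carlo sampling: perturb the feature values, keep the samples whose prediction falls within a tolerance of a reasonable value, and summarize per-feature ranges and trend directions. The model, tolerance, and `mc_bounds_and_trends` helper below are illustrative assumptions, not the exact MC-BRP procedure:

```python
# Simplified Monte Carlo sketch in the spirit of the MC-BRP description:
# sample perturbations, keep those giving a "reasonable" prediction, then
# report per-feature value ranges and the direction of each trend.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=1000)
model = RandomForestRegressor(random_state=0).fit(X, y)

def mc_bounds_and_trends(x, y_true, tol=0.5, n_samples=10000, scale=1.0):
    samples = x + rng.normal(scale=scale, size=(n_samples, x.size))
    preds = model.predict(samples)
    ok = samples[np.abs(preds - y_true) < tol]       # "reasonable" predictions
    bounds = np.stack([ok.min(axis=0), ok.max(axis=0)], axis=1)
    trends = [np.sign(np.corrcoef(samples[:, j], preds)[0, 1]) for j in range(x.size)]
    return bounds, trends

bounds, trends = mc_bounds_and_trends(x=np.zeros(3), y_true=0.0)
print("feature ranges giving a reasonable prediction:\n", bounds)
print("trend sign per feature (+1 increasing, -1 decreasing):", trends)
```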

Global Aggregations of Local Explanations for Black Box models

no code implementations · 5 Jul 2019 · Ilse van der Linden, Hinda Haned, Evangelos Kanoulas

We present Global Aggregations of Local Explanations (GALE) with the objective of providing insight into a model's global decision-making process.

Decision Making · Open-Ended Question Answering
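
The GALE excerpt above is about turning per-instance (local) explanations into a global picture of model behavior. As a hedged illustration of that general pattern, not the GALE aggregations themselves, the sketch below uses crude finite-difference sensitivities as stand-in local explanations and averages their magnitudes per feature; the model, data, and `local_weights` helper are assumptions:

```python
# Hedged sketch: aggregate per-instance local explanation weights into a
# global feature ranking. Local weights here are coarse finite-difference
# sensitivities standing in for LIME/SHAP-style explanations.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=500)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def local_weights(x, eps=0.5):
    """Per-feature sensitivity of the prediction around x (stand-in local explanation).
    A coarse step is used so the tree ensemble's piecewise-constant output changes."""
    base = model.predict(x.reshape(1, -1))[0]
    w = np.zeros(x.size)
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] += eps
        w[j] = (model.predict(x_pert.reshape(1, -1))[0] - base) / eps
    return w

local = np.array([local_weights(x) for x in X[:100]])
global_importance = np.abs(local).mean(axis=0)   # one possible aggregation
print("global feature importance:", global_importance)
```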

Explaining Predictions from Tree-based Boosting Ensembles

no code implementations · 4 Jul 2019 · Ana Lucic, Hinda Haned, Maarten de Rijke

Understanding how "black-box" models arrive at their predictions has sparked significant interest from both within and outside the AI community.

counterfactual · Counterfactual Explanation
