1 code implementation • 12 Sep 2020 • Tiago Botari, Frederik Hvilshøj, Rafael Izbicki, Andre C. P. L. F. de Carvalho
Additionally, we introduce modifications to standard training algorithms for local interpretable models that foster more robust explanations and even allow the production of counterfactual examples.
1 code implementation • 11 Oct 2019 • Victor Coscrato, Marco Henrique de Almeida Inácio, Tiago Botari, Rafael Izbicki
We develop NLS (neural local smoother), a method that is complex enough to give good predictions, yet yields solutions that are easy to interpret without the need for a separate interpreter.
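To make the "local smoother" idea concrete, here is a minimal sketch of a classic kernel-weighted local linear fit, the building block that NLS generalizes (in NLS itself, a neural network outputs the local linear coefficients; the function name and parameters below are hypothetical illustrations, not the paper's API):

```python
import numpy as np

def local_linear_fit(x_query, X, y, bandwidth=0.5):
    """Kernel-weighted local linear regression at a single query point.

    Returns the local prediction and the local slope; the slope is the
    directly interpretable quantity (the feature's local effect).
    """
    # Gaussian kernel weights centered on the query point
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)
    # Local design matrix: intercept plus centered input
    A = np.column_stack([np.ones_like(X), X - x_query])
    # Weighted least squares: solve (A^T W A) beta = A^T W y
    Aw = A * w[:, None]
    beta = np.linalg.solve(Aw.T @ A, Aw.T @ y)
    return beta[0], beta[1]

# Usage: on data generated by y = 2x + 1, the local fit recovers
# prediction 2.0 and slope 2.0 at x = 0.5.
X = np.linspace(-2, 2, 200)
y = 2 * X + 1
pred, slope = local_linear_fit(0.5, X, y)
```

Because the fitted model is linear in a neighborhood of the query point, its coefficients can be read off as an explanation without invoking a separate post-hoc interpreter.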
no code implementations • 31 Jul 2019 • Tiago Botari, Rafael Izbicki, Andre C. P. L. F. de Carvalho
To do so, they induce interpretable models on the neighborhood of the instance to be explained.
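The generic recipe behind such local interpretation methods (popularized by LIME) can be sketched as follows; this is a hedged illustration of the general technique, not the paper's specific neighborhood construction, and all names and parameters here are hypothetical:

```python
import numpy as np

def explain_instance(black_box, x, n_samples=500, scale=0.3, seed=0):
    """Fit an interpretable linear surrogate around one instance.

    black_box: callable mapping an (n, d) array to n predictions.
    x: the (d,) instance to be explained.
    Returns the local feature weights of the surrogate model.
    """
    rng = np.random.default_rng(seed)
    # 1. Sample a neighborhood around the instance to be explained
    Z = x + scale * rng.standard_normal((n_samples, x.size))
    # 2. Query the black-box model on the perturbed points
    yz = black_box(Z)
    # 3. Weight samples by proximity to x (Gaussian kernel)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # 4. Fit a weighted linear model; its coefficients are the explanation
    A = np.column_stack([np.ones(n_samples), Z - x])
    Aw = A * w[:, None]
    beta = np.linalg.solve(Aw.T @ A, Aw.T @ yz)
    return beta[1:]

# Usage: for a linear black box 3*z0 - z1, the surrogate recovers
# local weights close to (3, -1).
coefs = explain_instance(lambda Z: 3 * Z[:, 0] - Z[:, 1],
                         np.array([1.0, 2.0]))
```

The quality of the explanation hinges on how the neighborhood is sampled, which is precisely the aspect these local interpretation methods investigate.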