1 code implementation • 27 Sep 2023 • Jeffrey N. Clark, Edward A. Small, Nawid Keshtmand, Michelle W. L. Wan, Elena Fillola Mayoral, Enrico Werner, Christopher P. Bourdeaux, Raul Santos-Rodriguez
Counterfactual explanations, and their associated algorithmic recourse, are typically used to understand, explain, and potentially alter a prediction from a black-box classifier.
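As a rough illustration of the idea (not the paper's method), a counterfactual explanation answers "what minimal change to the input would flip the classifier's decision?". The sketch below uses a toy linear "black-box" and a hypothetical greedy search over a single feature; all names and thresholds are illustrative assumptions.

```python
def predict(x):
    # Toy black-box loan classifier (illustrative only):
    # approve (1) if 2*income - debt > 10, else reject (0).
    return 1 if 2 * x["income"] - x["debt"] > 10 else 0

def counterfactual(x, feature, step=0.5, max_iters=100):
    """Greedily increase one feature until the prediction flips.

    Returns the modified input (the counterfactual), or None if no
    flip occurs within the iteration budget.
    """
    original = predict(x)
    cf = dict(x)  # copy so the original input is untouched
    for _ in range(max_iters):
        if predict(cf) != original:
            return cf
        cf[feature] += step
    return None

applicant = {"income": 4.0, "debt": 1.0}   # predicted 0 (rejected)
cf = counterfactual(applicant, "income")
# cf is the rejected applicant with the smallest income increase
# (in 0.5 increments) that changes the decision to "approve".
```

The returned counterfactual doubles as algorithmic recourse: it tells the applicant which concrete change (here, raising income to a specific level) would alter the outcome.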
no code implementations • 10 Feb 2023 • Nawid Keshtmand, Raul Santos-Rodriguez, Jonathan Lawry
Two fundamental requirements for deploying machine learning models in safety-critical systems are the ability to correctly detect out-of-distribution (OOD) data and the ability to explain the model's predictions.
no code implementations • 6 Nov 2022 • Nawid Keshtmand, Raul Santos-Rodriguez, Jonathan Lawry
We see that OOD samples tend to be classified into classes whose distribution is similar to that of the entire dataset.
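One way to make this observation concrete (a hypothetical sketch, not the paper's actual metric) is to compare the class distribution of predictions on a set of inputs against the overall label distribution of the dataset, e.g. with a KL divergence: if OOD inputs are spread across classes in proportion to the dataset, the divergence is near zero.

```python
import math
from collections import Counter

def class_distribution(labels):
    """Empirical distribution over class labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over the union of label sets, smoothing absent classes."""
    classes = set(p) | set(q)
    return sum(p.get(c, eps) * math.log(p.get(c, eps) / q.get(c, eps))
               for c in classes)

# Toy illustration (all data hypothetical): predictions on OOD inputs that
# mirror the dataset's class proportions vs. predictions concentrated on
# a single class.
all_labels = ["a"] * 50 + ["b"] * 30 + ["c"] * 20
mirrored_preds = ["a"] * 5 + ["b"] * 3 + ["c"] * 2   # matches dataset mix
concentrated_preds = ["a"] * 10                       # one class only

overall = class_distribution(all_labels)
low = kl_divergence(class_distribution(mirrored_preds), overall)
high = kl_divergence(class_distribution(concentrated_preds), overall)
# low is ~0 (same distribution); high is clearly larger.
```

Under the paper's observation, predictions on OOD samples would behave like `mirrored_preds`, yielding a class distribution close to the dataset-wide one.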