1 code implementation • 1 Sep 2023 • Laura State, Salvatore Ruggieri, Franco Turini
Explaining opaque Machine Learning (ML) models is an increasingly relevant problem.
1 code implementation • 29 May 2023 • Laura State, Salvatore Ruggieri, Franco Turini
REASONX provides interactive contrastive explanations that can be augmented by background knowledge, and it can operate under under-specified information, leading to increased flexibility in the provided explanations.
no code implementations • 14 Mar 2023 • Carlos Mougan, Laura State, Antonio Ferrara, Salvatore Ruggieri, Steffen Staab
Liberalism-oriented political philosophy holds that all individuals should be treated equally, independently of their protected characteristics.
1 code implementation • 11 Nov 2022 • Laura State, Hadrien Salat, Stefania Rubrichi, Zbigniew Smoreda
We conclude our paper by pointing to the two main challenges we encountered during our work: data processing and model design, which may be restricted by currently available XAI methods, and the importance of domain knowledge for interpreting explanations.