1 code implementation • NLPerspectives (LREC) 2022 • Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro, Salvatore Ruggieri
Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning.
no code implementations • 29 Nov 2023 • Francesco Spinnato, Riccardo Guidotti, Anna Monreale, Mirco Nanni
High-dimensional time series data poses challenges due to its dynamic nature, varying lengths, and presence of missing values.
no code implementations • 15 Nov 2023 • Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein
While the impact of these biases has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, offering a constrained view of the nature of societal biases within language models.
no code implementations • 30 Oct 2023 • Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith, Simone Stumpf
As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount.
Explainable Artificial Intelligence (XAI)
1 code implementation • 29 Aug 2023 • Riccardo Guidotti, Salvatore Ruggieri
In eXplainable Artificial Intelligence (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances: minimality, actionability, stability, diversity, plausibility, and discriminative power.
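As a toy illustration of two of these properties, the sketch below scores a candidate counterfactual for minimality (how few features change) and plausibility (how close it stays to the data). The L0/L2 proxies and function names are illustrative assumptions, not the metrics used in the paper.

```python
import numpy as np

def minimality(x, x_cf):
    """L0 proxy: number of features changed (lower is more minimal)."""
    return int(np.sum(~np.isclose(x, x_cf)))

def plausibility(x_cf, X_train, k=5):
    """Mean L2 distance to the k nearest training points
    (lower means the counterfactual lies closer to the data)."""
    d = np.linalg.norm(X_train - x_cf, axis=1)
    return float(np.sort(d)[:k].mean())

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
x = X_train[0]
x_cf = x.copy()
x_cf[2] += 1.5  # candidate counterfactual: a single-feature change
print(minimality(x, x_cf), plausibility(x_cf, X_train))
```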
1 code implementation • 12 Jun 2023 • Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, Davide Bacciu
Continual Learning trains models on a stream of data, with the aim of learning new information without forgetting previous knowledge.
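A minimal sketch of this setting, assuming a simple rehearsal strategy: the model is updated on tasks arriving one at a time and replays a small buffer of earlier examples to limit forgetting. The use of scikit-learn's SGDClassifier and the random buffer policy are illustrative choices, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
# two synthetic "tasks" arriving one after the other as a stream
tasks = [(rng.normal(loc=m, size=(200, 5)),
          rng.integers(0, 2, size=200)) for m in (0.0, 3.0)]

model = SGDClassifier(loss="log_loss")
buf_X, buf_y = [], []  # small rehearsal buffer of past examples

for X, y in tasks:
    if buf_X:  # replay stored examples alongside the new task
        X = np.vstack([X] + buf_X)
        y = np.concatenate([y] + buf_y)
    model.partial_fit(X, y, classes=np.array([0, 1]))
    keep = rng.choice(len(X), size=20, replace=False)
    buf_X.append(X[keep]); buf_y.append(y[keep])  # remember a sample
```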
1 code implementation • 18 Jan 2023 • Martina Cinquini, Fosca Giannotti, Riccardo Guidotti
However, the variables of a dataset typically depend on one another, and these dependencies are not considered in data generation, leading to the creation of implausible records.
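The contrast can be sketched with a toy conditional sampler: resampling each column independently produces implausible records, while sampling one variable conditioned on the other respects the dependency. The linear-regression conditional model here is an illustrative stand-in, not the paper's approach.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# real data with a strong dependency: x1 is roughly 2 * x0
x0 = rng.normal(size=500)
X = np.column_stack([x0, 2 * x0 + rng.normal(scale=0.1, size=500)])

# independent resampling ignores the dependency -> implausible pairs
indep = np.column_stack([rng.choice(X[:, 0], 5), rng.choice(X[:, 1], 5)])

# dependency-aware generation: draw x0, then x1 conditioned on x0
cond = LinearRegression().fit(X[:, :1], X[:, 1])
x0_new = rng.choice(X[:, 0], 5)
dep = np.column_stack([x0_new, cond.predict(x0_new.reshape(-1, 1))])
print(indep, dep, sep="\n")
```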
no code implementations • 18 Jan 2023 • Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo
We propose a use-case study for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations of the decisions of a state-of-the-art deep neural network classifier trained to characterize skin lesions from examples.
1 code implementation • 13 Dec 2022 • Alessandro Poggiali, Alessandro Berti, Anna Bernasconi, Gianna M. Del Corso, Riccardo Guidotti
In particular, we exploit quantum phenomena to speed up the computation of distances.
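A classical toy simulation of the swap-test idea that underlies many quantum distance estimators: the squared overlap |⟨a|b⟩|² is estimated from repeated ancilla measurements, and for unit vectors it relates to Euclidean distance via ||a − b||² = 2 − 2⟨a, b⟩. This sketches the general principle only, not the specific circuits of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def swap_test_overlap(a, b, shots=10_000):
    """Classically simulate a swap test: the ancilla reads 0 with
    probability (1 + |<a|b>|^2) / 2, so repeated shots give an
    estimate of the squared overlap between the two states."""
    p0 = (1 + abs(np.vdot(a, b)) ** 2) / 2
    zeros = rng.binomial(shots, p0)
    return max(0.0, 2 * zeros / shots - 1)

a = rng.normal(size=8); a /= np.linalg.norm(a)
b = rng.normal(size=8); b /= np.linalg.norm(b)
# for unit vectors, ||a - b||^2 = 2 - 2<a, b>
print(swap_test_overlap(a, b), np.dot(a, b) ** 2)
```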
1 code implementation • 10 Dec 2022 • Martina Cinquini, Riccardo Guidotti
A main drawback of eXplainable Artificial Intelligence (XAI) approaches is the feature independence assumption, which hinders the study of potential variable dependencies.
Explainable Artificial Intelligence (XAI)
no code implementations • 22 Nov 2021 • Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo, Fosca Giannotti
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems.
1 code implementation • 25 Feb 2021 • Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, Salvatore Rinzivillo
The widespread adoption of black-box models in Artificial Intelligence has heightened the need for explanation methods that reveal how these opaque models reach specific decisions.
1 code implementation • 19 Jan 2021 • Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti
Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other.
no code implementations • 27 Jan 2020 • Riccardo Guidotti, Anna Monreale, Stan Matwin, Dino Pedreschi
We present an approach to explain the decisions of black box models for image classification.
no code implementations • 22 Oct 2018 • Riccardo Guidotti, Salvatore Ruggieri
Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent.
no code implementations • 26 Jun 2018 • Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini
We introduce the local-to-global framework for black box explanation, a novel approach with promising early results, which paves the way for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations in terms of highly expressive logic-based rules, with a statistical and causal interpretation; (ii) the inference of local explanations aimed at revealing the logic of the decision adopted for a specific instance, by querying and auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.
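A toy rendition of the local-to-global idea: audit a black box in the vicinity of many instances, extract a simple local rule from a shallow surrogate tree at each, then generalize by keeping the most frequent rules. The Gaussian neighborhoods and the root-split rule extraction are simplifying assumptions, not the paper's algorithms.

```python
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

rules = Counter()
for x in X[:50]:  # local step: audit the black box around each instance
    Z = x + rng.normal(scale=0.3, size=(200, 3))
    stump = DecisionTreeClassifier(max_depth=1).fit(Z, black_box.predict(Z))
    t = stump.tree_
    if t.node_count > 1:  # record the local rule at the root split
        rules[(int(t.feature[0]), round(float(t.threshold[0]), 1))] += 1

# global step: the most frequent local rules form a compact description
print(rules.most_common(3))  # (feature index, threshold) -> count
```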
1 code implementation • 28 May 2018 • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, Fosca Giannotti
Then, from the logic of the local interpretable predictor, it derives a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes to the instance's features that would lead to a different outcome.
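The structure of such an explanation can be sketched as follows: a shallow tree fit on a synthetic neighborhood labeled by the black box acts as the local interpretable predictor, the path followed by the instance yields the decision rule, and the path of a nearby differently-labeled point yields a counterfactual rule. The Gaussian neighborhood is a simplifying assumption standing in for the paper's more careful neighborhood generation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
black_box = RandomForestClassifier(random_state=0).fit(X, (X[:, 0] > 0.5).astype(int))

x = np.array([1.0, 0.0])                       # instance to explain
Z = x + rng.normal(scale=0.5, size=(300, 2))   # synthetic neighborhood
yz = black_box.predict(Z)
local = DecisionTreeClassifier(max_depth=2).fit(Z, yz)  # interpretable predictor

def rule_for(tree, point):
    """Collect the split conditions along the tree path of `point`."""
    t, node, conds = tree.tree_, 0, []
    while t.children_left[node] != -1:          # -1 marks a leaf
        f, thr = int(t.feature[node]), float(t.threshold[node])
        branch = point[f] <= thr
        conds.append(f"x{f} {'<=' if branch else '>'} {thr:.2f}")
        node = t.children_left[node] if branch else t.children_right[node]
    return conds

print("decision rule:", rule_for(local, x))
# counterfactual rule: path of the closest neighbor labeled differently
other = Z[yz != black_box.predict(x.reshape(1, -1))[0]]
if len(other):
    x_cf = other[np.linalg.norm(other - x, axis=1).argmin()]
    print("counterfactual rule:", rule_for(local, x_cf))
```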
no code implementations • 6 Feb 2018 • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, Fosca Giannotti
The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem, as a consequence delineating, explicitly or implicitly, its own definition of interpretability and explanation.