no code implementations • 4 Dec 2023 • Alessandro Farace di Villaforesta, Lucie Charlotte Magister, Pietro Barbiero, Pietro Liò
To address the challenge of the "black-box" nature of deep learning in medical settings, we combine GCExplainer, an automated concept discovery solution, with Logic Explained Networks to provide global explanations for Graph Neural Networks.
1 code implementation • 25 Nov 2023 • Jonas Jürß, Lucie Charlotte Magister, Pietro Barbiero, Pietro Liò, Nikola Simidjievski
A line of interpretable methods approaches this by discovering a small set of relevant concepts as subgraphs in the last GNN layer that together explain the prediction.
1 code implementation • 1 Jul 2023 • Gabriele Dominici, Pietro Barbiero, Lucie Charlotte Magister, Pietro Liò, Nikola Simidjievski
Multimodal learning is an essential paradigm for addressing complex real-world problems, where individual data modalities are typically insufficient to accurately solve a given modelling task.
no code implementations • 17 May 2023 • Georg Wölflein, Lucie Charlotte Magister, Pietro Liò, David J. Harrison, Ognjen Arandjelović
We evaluate our model on a custom MNIST-based MIL dataset that requires the consideration of relative spatial information, as well as on CAMELYON16, a publicly available cancer metastasis detection dataset, where we achieve a test AUROC score of 0.91.
1 code implementation • 27 Apr 2023 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Lio', Frederic Precioso, Mateja Jamnik, Giuseppe Marra
Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust.
1 code implementation • 9 Feb 2023 • Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Lio
Explainable AI (XAI) has recently seen a surge in research on concept extraction, which focuses on extracting human-interpretable concepts from Deep Neural Networks.
no code implementations • 16 Dec 2022 • Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn
Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets.
1 code implementation • 22 Aug 2022 • Han Xuanyuan, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte Magister, Pietro Lió
We propose a novel approach for producing global explanations for GNNs using neuron-level concepts to enable practitioners to have a high-level view of the model.
no code implementations • 27 Jul 2022 • Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Lio
The opaque reasoning of Graph Neural Networks induces a lack of human trust.
no code implementations • 25 Jul 2021 • Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Liò
Motivated by the aim of providing global explanations, we adapt the well-known Automated Concept-based Explanation approach (Ghorbani et al., 2019) to GNN node and graph classification, and propose GCExplainer.
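The adaptation described above can be sketched at a high level: GCExplainer discovers concepts by clustering the activations of a trained GNN's last layer, so that each cluster of similar node embeddings can be inspected as a candidate concept. The sketch below is a minimal, hypothetical illustration of that clustering step only, using random arrays as stand-ins for GNN embeddings; the variable names, the choice of k, and the use of scikit-learn's KMeans are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of GCExplainer-style concept discovery:
# cluster last-layer GNN node embeddings and treat each cluster
# as a candidate human-inspectable concept.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for last-layer GNN node embeddings (n_nodes x hidden_dim);
# in practice these would come from a trained model's forward pass.
embeddings = rng.normal(size=(200, 16))

k = 5  # number of concepts, a hyperparameter chosen by the practitioner
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

# Each node is assigned a concept id; the nodes nearest each centroid
# (with their neighbourhood subgraphs) would then be visualised so a
# human can judge whether the cluster corresponds to a coherent concept.
concept_ids = km.labels_
for c in range(k):
    members = np.flatnonzero(concept_ids == c)
    print(f"concept {c}: {len(members)} nodes")
```

In the paper's setting, inspecting the subgraphs around each cluster's representative nodes is what turns these clusters into global explanations, since the same concepts recur across many input graphs rather than explaining a single prediction.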