Search Results for author: Lucie Charlotte Magister

Found 10 papers, 5 papers with code

Digital Histopathology with Graph Neural Networks: Concepts and Explanations for Clinicians

no code implementations • 4 Dec 2023 • Alessandro Farace di Villaforesta, Lucie Charlotte Magister, Pietro Barbiero, Pietro Liò

To address the challenge of the "black-box" nature of deep learning in medical settings, we combine GCExplainer - an automated concept discovery solution - along with Logic Explained Networks to provide global explanations for Graph Neural Networks.

Graph Construction • Panoptic Segmentation

Everybody Needs a Little HELP: Explaining Graphs via Hierarchical Concepts

1 code implementation • 25 Nov 2023 • Jonas Jürß, Lucie Charlotte Magister, Pietro Barbiero, Pietro Liò, Nikola Simidjievski

A line of interpretable methods approach this by discovering a small set of relevant concepts as subgraphs in the last GNN layer that together explain the prediction.

Drug Discovery • Travel Time Estimation

SHARCS: Shared Concept Space for Explainable Multimodal Learning

1 code implementation • 1 Jul 2023 • Gabriele Dominici, Pietro Barbiero, Lucie Charlotte Magister, Pietro Liò, Nikola Simidjievski

Multimodal learning is an essential paradigm for addressing complex real-world problems, where individual data modalities are typically insufficient to accurately solve a given modelling task.

Retrieval

Deep Multiple Instance Learning with Distance-Aware Self-Attention

no code implementations • 17 May 2023 • Georg Wölflein, Lucie Charlotte Magister, Pietro Liò, David J. Harrison, Ognjen Arandjelović

We evaluate our model on a custom MNIST-based MIL dataset that requires the consideration of relative spatial information, as well as on CAMELYON16, a publicly available cancer metastasis detection dataset, where we achieve a test AUROC score of 0.91.

Cancer Metastasis Detection • Multiple Instance Learning +1

GCI: A (G)raph (C)oncept (I)nterpretation Framework

1 code implementation • 9 Feb 2023 • Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Liò

Explainable AI (XAI) underwent a recent surge in research on concept extraction, focusing on extracting human-interpretable concepts from Deep Neural Networks.

Explainable Artificial Intelligence (XAI) • Molecular Property Prediction +1

Teaching Small Language Models to Reason

no code implementations • 16 Dec 2022 • Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn

Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets.

GSM8K • Knowledge Distillation

Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis

1 code implementation • 22 Aug 2022 • Han Xuanyuan, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte Magister, Pietro Liò

We propose a novel approach for producing global explanations for GNNs using neuron-level concepts to enable practitioners to have a high-level view of the model.

GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks

no code implementations • 25 Jul 2021 • Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Liò

Motivated by the aim of providing global explanations, we adapt the well-known Automated Concept-based Explanation approach (Ghorbani et al., 2019) to GNN node and graph classification, and propose GCExplainer.

Graph Classification • Node Classification
