Search Results for author: Meike Nauta

Found 10 papers, 7 papers with code

PIPNet3D: Interpretable Detection of Alzheimer's Disease in MRI Scans

no code implementations · 27 Mar 2024 · Lisa Anita De Santi, Jörg Schlötterer, Michael Scheschenja, Joel Wessendorf, Meike Nauta, Vincenzo Positano, Christin Seifert

Information from neuroimaging examinations (CT, MRI) is increasingly used to support diagnoses of dementia, e.g., Alzheimer's disease.

Feature Engineering

Feature Attribution Explanations for Spiking Neural Networks

1 code implementation · 2 Nov 2023 · Elisa Nguyen, Meike Nauta, Gwenn Englebienne, Christin Seifert

We present Temporal Spike Attribution (TSA), a local explanation method for SNNs.
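
TSA's premise is that a prediction can be attributed to input spikes according to when they fired and how strongly they connect to the output. As a way to fix the idea, here is a minimal NumPy sketch; it is an illustrative simplification, not the method from the paper, and the function name, the exponential decay kernel, and all parameters are assumptions.

import numpy as np

def temporal_spike_attribution(spikes, weights, t_out, tau=5.0):
    """Attribute an output at time t_out to binary input spike trains.

    spikes  : (n_inputs, n_steps) array of 0/1 input spikes
    weights : (n_inputs,) synaptic weights to the output neuron
    t_out   : time step of the prediction being explained
    tau     : decay constant; more recent spikes count more
    """
    n_inputs, n_steps = spikes.shape
    times = np.arange(n_steps)
    # causal kernel: only spikes at or before t_out contribute, with decay
    decay = np.exp(-(t_out - times) / tau) * (times <= t_out)
    # contribution per input = temporally decayed spike mass, scaled by weight
    return (spikes * decay).sum(axis=1) * weights

rng = np.random.default_rng(0)
spikes = (rng.random((4, 20)) < 0.2).astype(float)
weights = rng.normal(size=4)
print(temporal_spike_attribution(spikes, weights, t_out=19))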

Worst-Case Morphs using Wasserstein ALI and Improved MIPGAN

no code implementations · 12 Oct 2023 · Una M. Kelly, Meike Nauta, Lu Liu, Luuk J. Spreeuwers, Raymond N. J. Veldhuis

In a recent paper, we introduced a worst-case upper bound on how challenging morphing attacks can be for a face recognition (FR) system.

Face Recognition · MORPH

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers

1 code implementation · 26 Jul 2023 · Meike Nauta, Christin Seifert

Interpretable part-prototype models are computer vision models that are explainable by design.

Interpreting and Correcting Medical Image Classification with PIP-Net

1 code implementation · 19 Jul 2023 · Meike Nauta, Johannes H. Hegeman, Jeroen Geerdink, Jörg Schlötterer, Maurice van Keulen, Christin Seifert

We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.

Decision Making · Image Classification +2

PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification

1 code implementation · CVPR 2023 · Meike Nauta, Jörg Schlötterer, Maurice van Keulen, Christin Seifert

Driven by the principle of explainability-by-design, we introduce PIP-Net (Patch-based Intuitive Prototypes Network): an interpretable image classification model that learns, in a self-supervised fashion, prototypical parts that correlate better with human vision.

Decision Making · Image Classification
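
For a concrete picture of the patch-prototype scoring that PIP-Net-style models perform at inference, here is a toy PyTorch sketch: patch embeddings are compared against learned prototypes, each prototype's presence is pooled over all patches, and a linear layer maps presences to class scores. The module name and hyperparameters are hypothetical, and the real model additionally constrains the classification weights to be sparse and non-negative.

import torch
import torch.nn as nn

class PatchPrototypeHead(nn.Module):
    def __init__(self, dim=64, n_prototypes=16, n_classes=3):
        super().__init__()
        # 1x1 conv: per-patch evidence score for each prototype
        self.protos = nn.Conv2d(dim, n_prototypes, kernel_size=1, bias=False)
        self.classify = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, feats):                 # feats: (B, dim, H, W) backbone output
        sim = self.protos(feats)              # (B, n_prototypes, H, W)
        presence = torch.softmax(sim, dim=1)  # each patch votes for one prototype
        pooled = presence.amax(dim=(2, 3))    # (B, n_prototypes): prototype present anywhere?
        return self.classify(pooled)          # prototype presences -> class scores

head = PatchPrototypeHead()
logits = head(torch.randn(2, 64, 7, 7))       # e.g. 7x7 patch grid from a backbone
print(logits.shape)                           # torch.Size([2, 3])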

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI

no code implementations · 20 Jan 2022 · Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, Christin Seifert

Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers introducing an XAI method, published in the last 7 years at major AI and ML conferences.

Explainable Artificial Intelligence (XAI)

Neural Prototype Trees for Interpretable Fine-grained Image Recognition

1 code implementation · CVPR 2021 · Meike Nauta, Ron van Bree, Christin Seifert

We propose the Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition.

Decision Making · Fine-Grained Image Recognition +1
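
At its core, a ProtoTree is a soft decision tree whose internal nodes each hold a prototype: an input is routed right with a probability given by its similarity to that node's prototype, and the prediction is the routing-weighted mixture of the leaf distributions. Below is a hard-coded depth-2 NumPy sketch, with a sigmoid of a dot product standing in for the paper's actual similarity measure; names and sizes are illustrative only.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prototree_predict(z, prototypes, leaf_logits):
    """z: (d,) image embedding; prototypes: (3, d) for the 3 internal nodes;
    leaf_logits: (4, n_classes) for the 4 leaves of a depth-2 tree."""
    p = sigmoid(prototypes @ z)  # similarity -> probability of routing right
    # path probabilities of the four leaves: LL, LR, RL, RR (they sum to 1)
    path = np.array([(1 - p[0]) * (1 - p[1]), (1 - p[0]) * p[1],
                     p[0] * (1 - p[2]),       p[0] * p[2]])
    leaf_dist = np.exp(leaf_logits)
    leaf_dist /= leaf_dist.sum(axis=1, keepdims=True)  # softmax per leaf
    return path @ leaf_dist      # routing-weighted mixture over leaves

rng = np.random.default_rng(1)
print(prototree_predict(rng.normal(size=8), rng.normal(size=(3, 8)),
                        rng.normal(size=(4, 3))))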

This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition

1 code implementation · 5 Nov 2020 · Meike Nauta, Annemarie Jutte, Jesper Provoost, Christin Seifert

By explaining such 'misleading' prototypes, we improve the interpretability and simulatability of a prototype-based classification model.

Classification · General Classification

Causal Discovery with Attention-Based Convolutional Neural Networks

1 code implementation · Machine Learning and Knowledge Extraction 2019 · Meike Nauta, Doina Bucur, Christin Seifert

We therefore present the Temporal Causal Discovery Framework (TCDF), a deep learning framework that learns a causal graph structure by discovering causal relationships in observational time series data.

Causal Discovery · Decision Making +2
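
A condensed PyTorch sketch of the attention-based convolutional setup TCDF builds on: one small network per target series, with a trainable attention score per input series gating the inputs of a causal 1-D convolution that predicts the target. After training, inputs with high attention are candidate causes. TCDF's permutation-based validation and delay discovery are omitted here, and the class name and hyperparameters are assumptions.

import torch
import torch.nn as nn

class AttentionTCN(nn.Module):
    def __init__(self, n_series, kernel_size=3):
        super().__init__()
        self.attention = nn.Parameter(torch.ones(n_series))  # one score per input series
        pad = kernel_size - 1                                 # left context only => causal
        self.conv = nn.Conv1d(n_series, 1, kernel_size, padding=pad)

    def forward(self, x):                        # x: (B, n_series, T)
        gate = torch.softmax(self.attention, 0).view(1, -1, 1)
        y = self.conv(x * gate)                  # attention-gated causal convolution
        return y[..., :x.size(-1)]               # drop right overhang -> (B, 1, T)

model = AttentionTCN(n_series=4)
out = model(torch.randn(8, 4, 50))               # one-step-ahead predictions
print(out.shape, torch.softmax(model.attention, 0))  # inspect candidate causes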
