no code implementations • 27 Mar 2024 • Lisa Anita De Santi, Jörg Schlötterer, Michael Scheschenja, Joel Wessendorf, Meike Nauta, Vincenzo Positano, Christin Seifert
Information from neuroimaging examinations (CT, MRI) is increasingly used to support diagnoses of dementia, e.g., Alzheimer's disease.
1 code implementation • 2 Nov 2023 • Elisa Nguyen, Meike Nauta, Gwenn Englebienne, Christin Seifert
We present Temporal Spike Attribution (TSA), a local explanation method for spiking neural networks (SNNs).
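Since the paper's own code is linked above, the following is only a toy sketch of the general spike-time attribution idea, not TSA itself; the function and its decay scheme are assumptions for illustration. Each input spike is credited with its synaptic weight, exponentially discounted by how long before the explained time step it fired, so the attribution reflects both which neuron contributed and when.

```python
import torch

def spike_time_attribution(spikes, weights, t_eval, tau=5.0):
    """Toy spike-time attribution (hypothetical, in the spirit of TSA).

    spikes:  (T, N_in) binary input spike trains
    weights: (N_in,) synaptic weights to the explained output neuron
    t_eval:  time step at which the prediction is explained
    """
    T, _ = spikes.shape
    times = torch.arange(T, dtype=torch.float32)
    # Recent spikes matter more: exponential decay of influence over time.
    decay = torch.exp(-(t_eval - times).clamp(min=0) / tau)
    decay[times > t_eval] = 0.0  # spikes after t_eval cannot influence it
    contrib = spikes * decay.unsqueeze(1)   # (T, N_in)
    return contrib.sum(dim=0) * weights     # (N_in,) per-neuron attribution

spikes = torch.bernoulli(torch.full((100, 8), 0.1))
print(spike_time_attribution(spikes, torch.randn(8), t_eval=99))
```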
no code implementations • 12 Oct 2023 • Una M. Kelly, Meike Nauta, Lu Liu, Luuk J. Spreeuwers, Raymond N. J. Veldhuis
In a recent paper, we introduced a worst-case upper bound on how challenging morphing attacks can be for a face recognition (FR) system.
1 code implementation • 26 Jul 2023 • Meike Nauta, Christin Seifert
Interpretable part-prototype models are computer vision models that are explainable by design.
1 code implementation • 19 Jul 2023 • Meike Nauta, Johannes H. Hegeman, Jeroen Geerdink, Jörg Schlötterer, Maurice van Keulen, Christin Seifert
We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.
1 code implementation • CVPR 2023 • Meike Nauta, Jörg Schlötterer, Maurice van Keulen, Christin Seifert
Driven by the principle of explainability-by-design, we introduce PIP-Net (Patch-based Intuitive Prototypes Network): an interpretable image classification model that learns, in a self-supervised fashion, prototypical parts that correlate better with human vision.
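As a rough sketch of how such a model can stay interpretable by design (this is not PIP-Net's code; the class and its sizes are made up for illustration): patch-level prototype activations are pooled into image-level presence scores, and a non-negative linear layer turns presence into class evidence, so every logit decomposes into "prototype X is present and adds weight w to class Y".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPrototypeHead(nn.Module):
    """Minimal sketch of a PIP-Net-style head (hypothetical names)."""
    def __init__(self, n_protos=16, n_classes=3):
        super().__init__()
        self.classifier = nn.Linear(n_protos, n_classes, bias=False)
        with torch.no_grad():
            # Non-negative weights: a prototype can only add class evidence.
            self.classifier.weight.clamp_(min=0)

    def forward(self, proto_maps):
        # proto_maps: (B, n_protos, H, W) prototype similarity maps
        probs = F.softmax(proto_maps, dim=1)   # patches compete over prototypes
        presence = probs.amax(dim=(2, 3))      # (B, n_protos) image-level presence
        return presence, self.classifier(presence)

head = TinyPrototypeHead()
presence, logits = head(torch.randn(2, 16, 7, 7))
print(presence.shape, logits.shape)
```

The non-negative constraint is what keeps the reasoning additive: a prototype can only count for a class, never silently against it.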
no code implementations • 20 Jan 2022 • Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, Christin Seifert
Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers published in the last 7 years at major AI and ML conferences that introduce an XAI method.
1 code implementation • CVPR 2021 • Meike Nauta, Ron van Bree, Christin Seifert
We propose the Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition.
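The tree's appeal is that each internal node asks a single visual question. A minimal sketch of one such node (hypothetical names, not the released ProtoTree code): the node's routing probability is the similarity between its learned prototype and the best-matching patch of the input, so a root-to-leaf path reads as a chain of "does the image contain a part that looks like this?" decisions.

```python
import torch
import torch.nn as nn

class ProtoNode(nn.Module):
    """Sketch of one internal node of a ProtoTree-like soft decision tree."""
    def __init__(self, dim=64):
        super().__init__()
        self.prototype = nn.Parameter(torch.randn(dim))

    def forward(self, patches):
        # patches: (B, P, dim) patch embeddings from a CNN backbone
        dists = ((patches - self.prototype) ** 2).sum(-1)   # (B, P)
        # Similarity of the best-matching patch decides the routing.
        return torch.exp(-dists.min(dim=1).values)          # P(go right), in (0, 1]

node = ProtoNode()
print(node(torch.randn(4, 49, 64)))  # per-image routing probabilities
```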
1 code implementation • 5 Nov 2020 • Meike Nauta, Annemarie Jutte, Jesper Provoost, Christin Seifert
By explaining such 'misleading' prototypes, we improve the interpretability and simulatability of a prototype-based classification model.
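The underlying recipe can be sketched as a perturbation test (all names below are hypothetical, not the paper's code): change one visual characteristic of the image at a time, such as hue or contrast, and measure how much the prototype's similarity drops. A prototype whose similarity collapses under a hue change is evidently colour-based, even if it visually suggests a shape, which is exactly the "misleading" case.

```python
import torch
import torch.nn as nn

def explain_prototype(img, embed, prototype):
    """Sketch: attribute a prototype's similarity to visual characteristics."""
    def sim(x):  # similarity between image embedding and prototype vector
        return -torch.dist(embed(x), prototype)

    base = sim(img)
    perturbed = {
        "hue": img.flip(0),                   # crude hue change: reverse channels
        "contrast": (img * 0.5).clamp(0, 1),  # lower contrast/brightness
    }
    # Large drop -> the prototype relies on that characteristic.
    return {k: (base - sim(v)).item() for k, v in perturbed.items()}

embed = nn.Sequential(nn.Flatten(0), nn.Linear(3 * 8 * 8, 16))  # stand-in encoder
print(explain_prototype(torch.rand(3, 8, 8), embed, torch.randn(16)))
```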
1 code implementation • Machine Learning and Knowledge Extraction 2019 • Meike Nauta, Doina Bucur, Christin Seifert
We therefore present the Temporal Causal Discovery Framework (TCDF), a deep learning framework that learns a causal graph structure by discovering causal relationships in observational time series data.
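Schematically, TCDF trains one attention-based convolutional network per target series; a compressed sketch of that idea follows (simplified, not the released implementation). The learned attention scores over the input series point at potential causes, and the receptive field of the causal convolutions bounds the time delay that can be discovered.

```python
import torch
import torch.nn as nn

class AttentionCausalNet(nn.Module):
    """Sketch of TCDF's core idea: predict one target series from all
    series, and read candidate causes off the learned attention scores."""
    def __init__(self, n_series, kernel=4):
        super().__init__()
        self.attention = nn.Parameter(torch.ones(n_series, 1))
        self.conv = nn.Conv1d(n_series, 1, kernel, padding=kernel - 1)

    def forward(self, x):                          # x: (B, n_series, T)
        a = torch.softmax(self.attention, dim=0)   # attention over input series
        h = self.conv(x * a)                       # convolve attended inputs
        return h[..., : x.size(-1)]                # trim padding -> causal (B, 1, T)

net = AttentionCausalNet(n_series=5)
print(net(torch.randn(2, 5, 100)).shape)  # torch.Size([2, 1, 100])
```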