Search Results for author: Christin Seifert

Found 26 papers, 15 papers with code

Prototype-based Interpretable Breast Cancer Prediction Models: Analysis and Challenges

1 code implementation · 29 Mar 2024 · Shreyasi Pathak, Jörg Schlötterer, Jeroen Veltman, Jeroen Geerdink, Maurice van Keulen, Christin Seifert

Specifically, we apply three state-of-the-art prototype-based models, ProtoPNet, BRAIxProtoPNet++ and PIP-Net, on mammography images for breast cancer prediction and evaluate these models w.r.t.

Explainable Models

PIPNet3D: Interpretable Detection of Alzheimer's Disease in MRI Scans

no code implementations · 27 Mar 2024 · Lisa Anita De Santi, Jörg Schlötterer, Michael Scheschenja, Joel Wessendorf, Meike Nauta, Vincenzo Positano, Christin Seifert

Information from neuroimaging examinations (CT, MRI) is increasingly used to support diagnoses of dementia, e.g., Alzheimer's disease.

Feature Engineering

A Second Look on BASS -- Boosting Abstractive Summarization with Unified Semantic Graphs -- A Replication Study

no code implementations · 5 Mar 2024 · Osman Alperen Koraş, Jörg Schlötterer, Christin Seifert

We present a detailed replication study of the BASS framework, an abstractive summarization system based on the notion of Unified Semantic Graphs.

Abstractive Text Summarization

The Queen of England is not England's Queen: On the Lack of Factual Coherency in PLMs

1 code implementation · 2 Feb 2024 · Paul Youssef, Jörg Schlötterer, Christin Seifert

In this work, we consider a complementary aspect, namely the coherency of factual knowledge in PLMs, i.e., how often a PLM can predict the subject entity given its own initial prediction of the object entity.
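The coherency check described in this snippet can be pictured as a round-trip test: ask the model for the object of a (subject, relation) query, then ask for the subject of that predicted object under the inverse relation, and see whether the original subject comes back. The sketch below uses a toy dictionary as a stand-in for a PLM's top-1 predictions (the facts and the `PREDICT` table are illustrative assumptions, not the paper's actual probe):

```python
# Toy stand-in for a PLM's top-1 prediction on an (entity, relation) query.
# Real probing would query a pre-trained language model instead.
PREDICT = {
    ("England", "has_monarch"): "the Queen of England",
    ("the Queen of England", "is_monarch_of"): "England",
    ("France", "has_capital"): "Paris",
    ("Paris", "is_capital_of"): "Texas",  # deliberately incoherent round trip
}

def is_coherent(subject, relation, inverse_relation):
    """True if predicting the object and then inverting it recovers the subject."""
    obj = PREDICT.get((subject, relation))
    if obj is None:
        return False
    return PREDICT.get((obj, inverse_relation)) == subject

print(is_coherent("England", "has_monarch", "is_monarch_of"))  # True
print(is_coherent("France", "has_capital", "is_capital_of"))   # False
```

Aggregating this boolean over many (subject, relation) pairs gives a coherency rate for the model.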

Retrieval

Explainable Bayesian Optimization

1 code implementation · 24 Jan 2024 · Tanmay Chakraborty, Christin Seifert, Christian Wirth

In industry, Bayesian optimization (BO) is widely applied in the human-AI collaborative parameter tuning of cyber-physical systems.

Bayesian Optimization · Hyperparameter Optimization +1

Feature Attribution Explanations for Spiking Neural Networks

1 code implementation · 2 Nov 2023 · Elisa Nguyen, Meike Nauta, Gwenn Englebienne, Christin Seifert

We present Temporal Spike Attribution (TSA), a local explanation method for SNNs.

Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained Language Models

no code implementations · 25 Oct 2023 · Paul Youssef, Osman Alperen Koraş, Meijie Li, Jörg Schlötterer, Christin Seifert

Our contributions are: (1) We propose a categorization scheme for factual probing methods that is based on how their inputs, outputs and the probed PLMs are adapted; (2) We provide an overview of the datasets used for factual probing; (3) We synthesize insights about knowledge retention and prompt optimization in PLMs, analyze obstacles to adopting PLMs as knowledge bases and outline directions for future work.

Knowledge Probing · World Knowledge

Weakly Supervised Learning for Breast Cancer Prediction on Mammograms in Realistic Settings

1 code implementation · 19 Oct 2023 · Shreyasi Pathak, Jörg Schlötterer, Jeroen Geerdink, Onno Dirk Vijlbrief, Maurice van Keulen, Christin Seifert

We show that two-level MIL can be applied in realistic clinical settings where only case labels and a variable number of images per patient are available.
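The core idea of multiple-instance learning with only case-level labels can be sketched in a few lines: per-image scores (from some hypothetical per-image classifier) are pooled into one case-level prediction, so cases with different numbers of images are handled uniformly. This is an illustrative max-pooling sketch under the standard MIL assumption, not the paper's exact two-level architecture:

```python
def case_level_score(image_scores):
    """Aggregate per-image malignancy scores into one case-level score.

    Max pooling encodes the MIL assumption: a case is positive if at
    least one of its images shows evidence of the positive class.
    """
    if not image_scores:
        raise ValueError("a case must contain at least one image")
    return max(image_scores)

# Cases may contain a variable number of images; only the case label is needed.
case_a = [0.05, 0.12, 0.91, 0.30]  # four views, one suspicious
case_b = [0.08, 0.11]              # two views, all benign-looking
print(case_level_score(case_a))  # 0.91
print(case_level_score(case_b))  # 0.11
```

In a trained two-level model the pooling would sit between a learned image encoder and the case-level classifier; the aggregation step itself stays this simple.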

Weakly-supervised Learning

Is Last Layer Re-Training Truly Sufficient for Robustness to Spurious Correlations?

no code implementations · 1 Aug 2023 · Phuong Quynh Le, Jörg Schlötterer, Christin Seifert

Models trained with empirical risk minimization (ERM) are known to learn to rely on spurious features, i.e., their predictions are based on undesired auxiliary features which are strongly correlated with class labels but lack causal reasoning.

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers

1 code implementation · 26 Jul 2023 · Meike Nauta, Christin Seifert

Interpretable part-prototype models are computer vision models that are explainable by design.

Guidance in Radiology Report Summarization: An Empirical Evaluation and Error Analysis

1 code implementation · 24 Jul 2023 · Jan Trienes, Paul Youssef, Jörg Schlötterer, Christin Seifert

Automatically summarizing radiology reports into a concise impression can reduce the manual burden of clinicians and improve the consistency of reporting.

Abstractive Text Summarization

Interpreting and Correcting Medical Image Classification with PIP-Net

1 code implementation · 19 Jul 2023 · Meike Nauta, Johannes H. Hegeman, Jeroen Geerdink, Jörg Schlötterer, Maurice van Keulen, Christin Seifert

We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.

Decision Making · Image Classification +2

PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification

1 code implementation · CVPR 2023 · Meike Nauta, Jörg Schlötterer, Maurice van Keulen, Christin Seifert

Driven by the principle of explainability-by-design, we introduce PIP-Net (Patch-based Intuitive Prototypes Network): an interpretable image classification model that learns prototypical parts in a self-supervised fashion which correlate better with human vision.

Decision Making · Image Classification

Explaining Machine Learning Models in Natural Conversations: Towards a Conversational XAI Agent

no code implementations · 6 Sep 2022 · Van Bach Nguyen, Jörg Schlötterer, Christin Seifert

In this work, we show how to incorporate XAI in a conversational agent, using a standard design for the agent comprising natural language understanding and generation components.

Explainable Artificial Intelligence (XAI) · Natural Language Understanding

Survey on Automated Short Answer Grading with Deep Learning: from Word Embeddings to Transformers

no code implementations · 11 Mar 2022 · Stefan Haller, Adina Aldea, Christin Seifert, Nicola Strisciuglio

We complement previous surveys by providing a comprehensive analysis of recently published methods that deploy deep learning approaches.

Representation Learning · Word Embeddings

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI

no code implementations · 20 Jan 2022 · Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, Christin Seifert

Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers published in the last 7 years at major AI and ML conferences that introduce an XAI method.

Explainable Artificial Intelligence (XAI)

Neural Prototype Trees for Interpretable Fine-grained Image Recognition

1 code implementation · CVPR 2021 · Meike Nauta, Ron van Bree, Christin Seifert

We propose the Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition.

Decision Making · Fine-Grained Image Recognition +1

This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition

1 code implementation · 5 Nov 2020 · Meike Nauta, Annemarie Jutte, Jesper Provoost, Christin Seifert

By explaining such 'misleading' prototypes, we improve the interpretability and simulatability of a prototype-based classification model.

Classification · General Classification

Automated Retrieval of ATT&CK Tactics and Techniques for Cyber Threat Reports

no code implementations · 29 Apr 2020 · Valentine Legoy, Marco Caselli, Christin Seifert, Andreas Peter

Over recent years, threat intelligence sharing has steadily grown, leading cybersecurity professionals to access increasingly large amounts of heterogeneous data.

Retrieval

How model accuracy and explanation fidelity influence user trust

no code implementations · 26 Jul 2019 · Andrea Papenmeier, Gwenn Englebienne, Christin Seifert

We also found that users cannot be tricked by high-fidelity explanations into trusting a bad classifier.

BIG-bench Machine Learning · Fairness +1

Causal Discovery with Attention-Based Convolutional Neural Networks

1 code implementation · Machine Learning and Knowledge Extraction 2019 · Meike Nauta, Doina Bucur, Christin Seifert

We therefore present the Temporal Causal Discovery Framework (TCDF), a deep learning framework that learns a causal graph structure by discovering causal relationships in observational time series data.
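Reading a causal graph off learned attention scores, as TCDF does, can be pictured in miniature: each target series has an attention weight for every candidate cause, and a directed edge is kept where the weight clears a threshold. The weights, series names, and threshold below are illustrative assumptions, not TCDF's learned values or its full validation procedure:

```python
def causal_graph(attention, threshold=0.5):
    """Build a directed edge list (cause -> effect) from attention scores."""
    return [(cause, effect)
            for effect, scores in attention.items()
            for cause, weight in scores.items()
            if weight >= threshold and cause != effect]  # skip self-loops

# Attention of each target series over candidate causes (illustrative values).
attn = {
    "sales": {"ads": 0.81, "weather": 0.10, "sales": 0.90},
    "ads":   {"budget": 0.74, "weather": 0.05},
}
print(causal_graph(attn))  # [('ads', 'sales'), ('budget', 'ads')]
```

TCDF additionally validates such candidate edges (e.g., via permutation-based significance checks) and estimates time delays; the sketch only shows the thresholding step.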

Causal Discovery · Decision Making +2
