no code implementations • 4 May 2024 • Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert
Knowledge editing techniques (KEs) can update language models' obsolete or inaccurate knowledge learned from pre-training.
no code implementations • 29 Apr 2024 • Jorn-Jan van de Beld, Shreyasi Pathak, Jeroen Geerdink, Johannes H. Hegeman, Christin Seifert
In this work, we develop a multimodal deep-learning model for post-operative mortality prediction using pre-operative and per-operative data from elderly hip fracture patients.
no code implementations • 26 Apr 2024 • Van Bach Nguyen, Jörg Schlötterer, Christin Seifert
Counterfactual text generation aims to minimally change a text such that it is classified differently.
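To make the objective concrete, here is a toy sketch of the idea: greedily substitute words until a classifier flips its label, keeping the number of edits small. The lexicon classifier and antonym table are hypothetical stand-ins for illustration, not the paper's method.

```python
# Toy counterfactual text generation: greedily substitute words until a
# (toy) sentiment classifier flips its label. Classifier and substitution
# lexicon are illustrative stand-ins, not the paper's approach.

POSITIVE = {"great", "good", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}
ANTONYMS = {"great": "terrible", "good": "bad", "excellent": "poor",
            "terrible": "great", "bad": "good", "poor": "excellent"}

def classify(tokens):
    """Toy lexicon classifier: 1 = positive, 0 = negative."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return 1 if score > 0 else 0

def counterfactual(text):
    """Flip the predicted label with as few word substitutions as possible."""
    tokens = text.lower().split()
    original = classify(tokens)
    for i, tok in enumerate(tokens):          # greedy left-to-right edits
        if tok in ANTONYMS:
            tokens[i] = ANTONYMS[tok]
            if classify(tokens) != original:  # label flipped: minimal edit found
                return " ".join(tokens)
    return None  # no flipping edit found

print(counterfactual("the movie was great"))  # -> "the movie was terrible"
```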
no code implementations • 8 Apr 2024 • Ahmad Idrissi-Yaghir, Amin Dada, Henning Schäfer, Kamyar Arzideh, Giulia Baldini, Jan Trienes, Max Hasin, Jeanette Bewersdorff, Cynthia S. Schmidt, Marie Bauer, Kaleb E. Smith, Jiang Bian, Yonghui Wu, Jörg Schlötterer, Torsten Zesch, Peter A. Horn, Christin Seifert, Felix Nensa, Jens Kleesiek, Christoph M. Friedrich
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa.
1 code implementation • 29 Mar 2024 • Shreyasi Pathak, Jörg Schlötterer, Jeroen Veltman, Jeroen Geerdink, Maurice van Keulen, Christin Seifert
Specifically, we apply three state-of-the-art prototype-based models, ProtoPNet, BRAIxProtoPNet++ and PIP-Net, on mammography images for breast cancer prediction and evaluate these models w.r.t.
no code implementations • 27 Mar 2024 • Lisa Anita De Santi, Jörg Schlötterer, Michael Scheschenja, Joel Wessendorf, Meike Nauta, Vincenzo Positano, Christin Seifert
Information from neuroimaging examinations (CT, MRI) is increasingly used to support diagnoses of dementia, e.g., Alzheimer's disease.
no code implementations • 5 Mar 2024 • Osman Alperen Koraş, Jörg Schlötterer, Christin Seifert
We present a detailed replication study of the BASS framework, an abstractive summarization system based on the notion of Unified Semantic Graphs.
1 code implementation • 2 Feb 2024 • Paul Youssef, Jörg Schlötterer, Christin Seifert
In this work, we consider a complementary aspect, namely the coherency of factual knowledge in PLMs, i.e., how often PLMs can predict the subject entity given their initial prediction of the object entity.
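The coherency check described above can be sketched as a forward query (object given subject) followed by a backward query (subject given the predicted object). A toy dictionary stands in for a masked language model here; all prompts and predictions are illustrative, and querying a real PLM would replace `predict`.

```python
# Minimal sketch of a bidirectional coherency probe. A toy lookup table
# stands in for the PLM's top-1 cloze predictions (illustrative only).

PREDICTIONS = {
    "The capital of France is [MASK].": "Paris",
    "Paris is the capital of [MASK].": "France",
    "The capital of Australia is [MASK].": "Sydney",   # factually wrong object
    "Sydney is the capital of [MASK].": "Australia",
}

def predict(prompt):
    return PREDICTIONS.get(prompt)

def is_coherent(subject, forward_tpl, backward_tpl):
    """Predict the object from the subject, then check whether the model
    recovers the subject from its own predicted object."""
    obj = predict(forward_tpl.format(subject))
    recovered = predict(backward_tpl.format(obj))
    return recovered == subject

fwd = "The capital of {} is [MASK]."
bwd = "{} is the capital of [MASK]."
print(is_coherent("France", fwd, bwd))     # France -> Paris -> France
print(is_coherent("Australia", fwd, bwd))  # coherent even though the object is wrong
```

Note that the second case is coherent but factually incorrect, which is exactly why coherency is a complementary property to accuracy.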
1 code implementation • 29 Jan 2024 • Jan Trienes, Sebastian Joseph, Jörg Schlötterer, Christin Seifert, Kyle Lo, Wei Xu, Byron C. Wallace, Junyi Jessy Li
Text simplification aims to make technical texts more accessible to laypeople but often results in deletion of information and vagueness.
1 code implementation • 24 Jan 2024 • Tanmay Chakraborty, Christin Seifert, Christian Wirth
In industry, Bayesian optimization (BO) is widely applied in the human-AI collaborative parameter tuning of cyber-physical systems.
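For readers unfamiliar with BO, the following is a generic textbook sketch of the loop (Gaussian process surrogate plus expected improvement on a 1-D toy objective), not the paper's human-AI collaborative setup.

```python
# Minimal Bayesian optimization loop: fit a GP surrogate to the observations,
# maximize expected improvement (EI) to pick the next point, repeat.
# Generic illustration on a toy objective, not the paper's method.
import numpy as np
from math import erf

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """GP posterior mean/std with an RBF kernel on 1-D inputs."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_query, x_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)   # diag of posterior covariance
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (mu - best) * cdf + sigma * pdf

def objective(x):                      # unknown function to maximize
    return -(x - 0.7) ** 2

x_obs = np.array([0.1, 0.5, 0.9])
y_obs = objective(x_obs)
grid = np.linspace(0, 1, 201)
for _ in range(10):                    # BO loop: fit surrogate, pick argmax EI
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(x_obs[np.argmax(y_obs)])         # converges near the optimum at 0.7
```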
1 code implementation • 2 Nov 2023 • Elisa Nguyen, Meike Nauta, Gwenn Englebienne, Christin Seifert
We present Temporal Spike Attribution (TSA), a local explanation method for spiking neural networks (SNNs).
no code implementations • 25 Oct 2023 • Paul Youssef, Osman Alperen Koraş, Meijie Li, Jörg Schlötterer, Christin Seifert
Our contributions are: (1) We propose a categorization scheme for factual probing methods that is based on how their inputs, outputs and the probed PLMs are adapted; (2) We provide an overview of the datasets used for factual probing; (3) We synthesize insights about knowledge retention and prompt optimization in PLMs, analyze obstacles to adopting PLMs as knowledge bases and outline directions for future work.
1 code implementation • 19 Oct 2023 • Shreyasi Pathak, Jörg Schlötterer, Jeroen Geerdink, Onno Dirk Vijlbrief, Maurice van Keulen, Christin Seifert
We show that two-level MIL can be applied in realistic clinical settings where only case labels and a variable number of images per patient are available.
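The core MIL idea of predicting a case label from a variable number of images can be sketched with a simple pooling rule. Max pooling is one common MIL aggregation choice; the scores below are illustrative, not the paper's model.

```python
# Sketch of MIL aggregation at the case level: per-image scores are pooled
# into a single patient ("case") prediction, so only a case label is needed
# for training. Max pooling and the scores here are illustrative.

def case_prediction(image_scores, threshold=0.5):
    """A case is predicted positive if its most suspicious image is."""
    case_score = max(image_scores)     # MIL max pooling over images
    return case_score, int(case_score >= threshold)

# patients with a variable number of images, as in the clinical setting above
patient_a = [0.1, 0.2, 0.9]    # one suspicious view -> positive case
patient_b = [0.2, 0.3]         # all views look benign -> negative case

print(case_prediction(patient_a))  # (0.9, 1)
print(case_prediction(patient_b))  # (0.3, 0)
```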
no code implementations • 1 Aug 2023 • Phuong Quynh Le, Jörg Schlötterer, Christin Seifert
Models trained with empirical risk minimization (ERM) are known to learn to rely on spurious features, i.e., their predictions are based on undesired auxiliary features that are strongly correlated with class labels but have no causal relationship to them.
1 code implementation • 26 Jul 2023 • Meike Nauta, Christin Seifert
Interpretable part-prototype models are computer vision models that are explainable by design.
1 code implementation • 24 Jul 2023 • Jan Trienes, Paul Youssef, Jörg Schlötterer, Christin Seifert
Automatically summarizing radiology reports into a concise impression can reduce the manual burden of clinicians and improve the consistency of reporting.
1 code implementation • 19 Jul 2023 • Meike Nauta, Johannes H. Hegeman, Jeroen Geerdink, Jörg Schlötterer, Maurice van Keulen, Christin Seifert
We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.
1 code implementation • CVPR 2023 • Meike Nauta, Jörg Schlötterer, Maurice van Keulen, Christin Seifert
Driven by the principle of explainability-by-design, we introduce PIP-Net (Patch-based Intuitive Prototypes Network): an interpretable image classification model that learns, in a self-supervised fashion, prototypical parts that correlate better with human vision.
1 code implementation • Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR) 2022 • Jan Trienes, Jörg Schlötterer, Hans-Ulrich Schildhaus, Christin Seifert
Automatic text simplification can help patients to better understand their own clinical notes.
no code implementations • 6 Sep 2022 • Van Bach Nguyen, Jörg Schlötterer, Christin Seifert
In this work, we show how to incorporate XAI in a conversational agent, using a standard design for the agent comprising natural language understanding and generation components.
no code implementations • 11 Mar 2022 • Stefan Haller, Adina Aldea, Christin Seifert, Nicola Strisciuglio
We complement previous surveys by providing a comprehensive analysis of recently published methods that deploy deep learning approaches.
no code implementations • 20 Jan 2022 • Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, Christin Seifert
Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers published in the last 7 years at major AI and ML conferences that introduce an XAI method.
no code implementations • 13 Jan 2022 • Hendrik F. R. Schmidt, Jörg Schlötterer, Marcel Bargull, Enrico Nasca, Ryan Aydelott, Christin Seifert, Folker Meyer
AI computing at scale is a difficult problem, especially in a health care setting.
1 code implementation • CVPR 2021 • Meike Nauta, Ron van Bree, Christin Seifert
We propose the Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition.
1 code implementation • 5 Nov 2020 • Meike Nauta, Annemarie Jutte, Jesper Provoost, Christin Seifert
By explaining such 'misleading' prototypes, we improve the interpretability and simulatability of a prototype-based classification model.
no code implementations • 29 Apr 2020 • Valentine Legoy, Marco Caselli, Christin Seifert, Andreas Peter
Over the last years, threat intelligence sharing has steadily grown, leading cybersecurity professionals to access increasingly large amounts of heterogeneous data.
1 code implementation • 16 Jan 2020 • Jan Trienes, Dolf Trieschnigg, Christin Seifert, Djoerd Hiemstra
We test the generalizability of three de-identification methods across languages and domains.
no code implementations • 26 Jul 2019 • Andrea Papenmeier, Gwenn Englebienne, Christin Seifert
We also found that users cannot be tricked by high-fidelity explanations into trusting a bad classifier.
1 code implementation • Machine Learning and Knowledge Extraction 2019 • Meike Nauta, Doina Bucur, Christin Seifert
We therefore present the Temporal Causal Discovery Framework (TCDF), a deep learning framework that learns a causal graph structure by discovering causal relationships in observational time series data.
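TCDF itself discovers causal relationships with attention-based convolutional networks; as a much simpler stand-in to illustrate the task, the sketch below scores a candidate cause by how much a lagged copy of one series explains another (a Granger-style check on synthetic data). All series and parameters are illustrative.

```python
# Simplified stand-in for temporal causal discovery (NOT TCDF's method):
# a Granger-style check that asks whether a lagged copy of one series
# explains the variance of another. Synthetic data: x causes y at lag 2.
import numpy as np

rng = np.random.default_rng(0)
n, lag = 500, 2
x = rng.normal(size=n)
y = np.roll(x, lag) * 0.8 + rng.normal(scale=0.1, size=n)  # y[i] ~ 0.8 * x[i-2]
y[:lag] = rng.normal(size=lag)                             # drop wrap-around

def lagged_r2(cause, effect, lag):
    """Variance in `effect` explained by `cause` shifted back by `lag` steps."""
    c, e = cause[:-lag], effect[lag:]
    slope = np.dot(c, e) / np.dot(c, c)   # OLS fit without intercept
    resid = e - slope * c
    return 1 - resid.var() / e.var()

print(lagged_r2(x, y, lag) > 0.9)   # x -> y: strong causal signal
print(lagged_r2(y, x, lag) > 0.9)   # y -> x: no signal in the reverse direction
```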