1 code implementation • 13 Oct 2022 • Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
Saliency maps can explain a neural model's predictions by identifying important input features.
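The abstract's notion of saliency can be illustrated with a minimal gradient-times-input sketch. This is a generic toy example (a hand-rolled logistic-regression "model" with an analytic gradient), not the method of the paper above; all names and values here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input_saliency(w, x):
    """Gradient-times-input saliency for a toy logistic-regression model.

    The gradient of sigmoid(w . x) w.r.t. x is sigmoid'(w . x) * w,
    so each input feature's importance score is |dy/dx_i * x_i|.
    """
    s = sigmoid(w @ x)
    grad = s * (1.0 - s) * w  # analytic input gradient
    return np.abs(grad * x)

# Illustrative weights and input: feature 2 has zero weight,
# so it cannot influence the prediction.
w = np.array([2.0, -1.0, 0.0])
x = np.array([1.0, 1.0, 5.0])
sal = gradient_x_input_saliency(w, x)
```

A feature with zero weight receives zero saliency here, which matches the intuition that saliency maps highlight the input features the model's prediction actually depends on.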
2 code implementations • EMNLP (ACL) 2021 • Nils Feldhus, Robert Schwarzenberg, Sebastian Möller
To facilitate research, we present Thermostat, which consists of a large collection of model explanations and accompanying analysis tools.

1 code implementation • SEMEVAL 2020 • Marc Hübner, Christoph Alt, Robert Schwarzenberg, Leonhard Hennig
Definition Extraction systems are a valuable knowledge source for both humans and algorithms.
2 code implementations • EMNLP (BlackboxNLP) 2021 • Robert Schwarzenberg, Nils Feldhus, Sebastian Möller
Amid a discussion about Green AI in which we see explainability neglected, we explore the possibility of efficiently approximating computationally expensive explainers.
1 code implementation • 21 Jul 2020 • Robert Schwarzenberg, Steffen Castle
In this work, we combine the two methods into a new method, Pattern-Guided Integrated Gradients (PGIG).
3 code implementations • 7 Jul 2020 • Karolina Zaczynska, Nils Feldhus, Robert Schwarzenberg, Aleksandra Gabryszak, Sebastian Möller
However, most of these studies were conducted only for English.
1 code implementation • LREC 2020 • Dmitrii Aksenov, Julián Moreno-Schneider, Peter Bourgonje, Robert Schwarzenberg, Leonhard Hennig, Georg Rehm
The results of our models are compared to a baseline and the state-of-the-art models on the CNN/Daily Mail dataset.
1 code implementation • WS 2019 • Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, Leonhard Hennig
Representations in the hidden layers of Deep Neural Networks (DNNs) are often hard to interpret, since it is difficult to project them into an interpretable domain.
1 code implementation • WS 2019 • Robert Schwarzenberg, Lisa Raithel, David Harbecke
Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models.
1 code implementation • NAACL 2019 • Robert Schwarzenberg, David Harbecke, Vivien Macketanz, Eleftherios Avramidis, Sebastian Möller
Evaluating translation models is a trade-off between effort and detail.
1 code implementation • WS 2018 • David Harbecke, Robert Schwarzenberg, Christoph Alt
PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks.
no code implementations • 17 Apr 2014 • Robert Schwarzenberg, Bernd Freisleben, Christopher Nimsky, Jan Egger
The Cube-Cut algorithm generates a directed graph with two terminal nodes (an s-t network), where the nodes of the graph correspond to a cube-shaped subset of the image's voxels.
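The segmentation step behind such s-t networks is a minimum s-t cut, which by the max-flow/min-cut theorem can be found with any max-flow algorithm. Below is a minimal, self-contained Edmonds-Karp sketch on a toy two-voxel graph; the capacities are illustrative assumptions, not values from the paper.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow on an adjacency-matrix graph. The min s-t
    cut it certifies is what graph-cut segmentation uses to separate
    'object' voxels (source side) from 'background' voxels (sink side)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:  # no augmenting path left: flow is maximal
            break
        # Find the bottleneck capacity along the path ...
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # ... and push that much flow along it.
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual (reverse) edge
            v = u
        total += bottleneck
    return total

# Toy 2-voxel "image": node 0 = source s, node 3 = sink t, nodes 1-2
# are voxels. Terminal edges encode per-voxel object/background costs;
# the edge between voxels 1 and 2 encodes a smoothness penalty.
cap = [[0, 5, 1, 0],
       [0, 0, 2, 1],
       [0, 2, 0, 5],
       [0, 0, 0, 0]]
cut_value = max_flow(cap, 0, 3)
```

The value of the maximum flow equals the cost of the cheapest cut separating s from t, so the same computation yields both the flow and the segmentation boundary.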