no code implementations • 23 Aug 2023 • Leander Weber, Jim Berend, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
In this paper, we present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors that utilizes explainability, specifically Layer-wise Relevance Propagation (LRP), to assign rewards to individual connections based on their respective contributions to solving a given task.
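The mechanism named above (assigning reward to individual connections via LRP-style contributions) can be caricatured in a few lines. The layer size, feedback signal, and update rule below are our own illustrative choices, not the LFP rule from the paper:

```python
import numpy as np

# Toy caricature: use LRP-style contributions z_ij = x_i * w_ij to assign a
# per-connection "reward", then nudge each weight by that reward. This is an
# illustration of the stated idea, not the authors' LFP update rule.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))          # one linear layer, 3 inputs -> 2 outputs
x = np.array([1.0, 0.5, -1.0])
y_scores = x @ W

z = x[:, None] * W                   # contribution of connection (i, j)
# Scalar task feedback: +1 if the (hypothetical) correct class 0 wins, else -1.
feedback = 1.0 if y_scores.argmax() == 0 else -1.0
eps = 1e-9                           # numerical stabilizer for the division
# Each connection receives reward proportional to its share of its output.
R_conn = feedback * z / (z.sum(axis=0, keepdims=True) + eps)

lr = 0.1
W = W + lr * R_conn                  # reward-weighted nudge per connection
```

The point of the sketch is that the credit signal is a relevance share per connection rather than a gradient; everything else (the scalar feedback, the additive update) is a placeholder.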
no code implementations • 30 Nov 2022 • Frederik Pahde, Galip Ümit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
We further suggest an XAI evaluation framework with which we quantify and compare the effects of model canonization for various XAI methods in image classification tasks on the Pascal-VOC and ILSVRC2017 datasets, as well as for Visual Question Answering using CLEVR-XAI.
Explainable Artificial Intelligence (XAI) Image Classification +2
no code implementations • CVPR 2023 • Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek
To address shortcomings of this test, we start by observing an experimental gap in the ranking of explanation methods between randomization-based sanity checks [1] and model output faithfulness measures (e.g., [25]).
1 code implementation • 24 Aug 2022 • Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Alexander Binder, Ngai-Man Cheung
Visual counterfeits are increasingly causing an existential conundrum in mainstream media with rapid evolution in neural image synthesis methods.
no code implementations • 15 Mar 2022 • Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek
We conclude that while model improvement based on XAI can have significant beneficial effects even on complex and not easily quantifiable model properties, these methods need to be applied carefully, since their success can vary depending on a multitude of factors, such as the model and dataset used, or the employed explanation method.
Explainable Artificial Intelligence (XAI)
no code implementations • 24 Oct 2021 • Yi Xiang Marcus Tan, Penny Chong, Jiamei Sun, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
In this work, we aim to close this gap by studying a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
no code implementations • 25 Jun 2021 • Vignesh Srinivasan, Nils Strodthoff, Jackie Ma, Alexander Binder, Klaus-Robert Müller, Wojciech Samek
Our results indicate that models initialized from ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
no code implementations • 9 Dec 2020 • Yi Xiang Marcus Tan, Penny Chong, Jiamei Sun, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
In this work, we propose a detection strategy to identify adversarial support sets, aimed at destroying the understanding of a few-shot classifier for a certain class.
no code implementations • 11 Nov 2020 • Penny Chong, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
We compare the performance of our proposed network with other baselines in terms of classification, explanation quality, and outlier detection.
no code implementations • 21 Jul 2020 • Lin Geng Foo, Rui En Ho, Jiamei Sun, Alexander Binder
In this work, we propose a two-step post-processing procedure, Split and Expand, that directly improves the conversion of segmentation maps to instances.
1 code implementation • 17 Jul 2020 • Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder
It leverages the explanation scores, obtained from existing explanation methods applied to the predictions of FSC models, computed for intermediate feature maps of the models.
Ranked #8 on Cross-Domain Few-Shot on ISIC2018
1 code implementation • arXiv 2020 • Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder
From our experiments, we find that the SmoothTaylor approach together with adaptive noising is able to generate better quality saliency maps with less noise and higher sensitivity to the relevant points in the input space as compared to Integrated Gradients.
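A minimal sketch of a SmoothTaylor-style attribution: the relevance of input dimension i is a first-order Taylor term, gradient times (x - z), averaged over noisy expansion points z around the input. The toy model, noise scale, and function names here are our assumptions, not the paper's code:

```python
import numpy as np

def f(x):
    # Toy "model": a smooth scalar function standing in for a network output.
    return np.sum(np.tanh(x) ** 2)

def grad_f(x):
    # Analytic gradient of f: d/dx tanh(x)^2 = 2*tanh(x)*(1 - tanh(x)^2).
    return 2 * np.tanh(x) * (1 - np.tanh(x) ** 2)

def smooth_taylor(x, sigma=0.5, n_samples=100, seed=0):
    # Average the first-order Taylor term grad_f(z) * (x - z) over noisy
    # expansion points z = x + Gaussian noise.
    rng = np.random.default_rng(seed)
    R = np.zeros_like(x)
    for _ in range(n_samples):
        z = x + rng.normal(scale=sigma, size=x.shape)
        R += grad_f(z) * (x - z)
    return R / n_samples

x = np.array([1.0, -0.5, 0.0, 2.0])
relevance = smooth_taylor(x)
```

The "adaptive noising" mentioned above would correspond to tuning the noise scale `sigma` per input rather than fixing it, as done here for brevity.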
no code implementations • ECCV 2020 • Jing Yu Koh, Duc Thanh Nguyen, Quang-Trung Truong, Sai-Kit Yeung, Alexander Binder
Fully-automatic execution is the ultimate goal for many Computer Vision applications.
no code implementations • 24 Jan 2020 • Penny Chong, Lukas Ruff, Marius Kloft, Alexander Binder
However, Deep SVDD suffers from hypersphere collapse (also known as mode collapse) if the architecture of the model does not comply with certain architectural constraints, e.g. the removal of bias terms.
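The Deep SVDD objective and the role of the bias constraint can be sketched with a single linear layer; sizes, learning rate, and step count below are illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))            # 64 "normal" training samples
W = rng.normal(scale=0.1, size=(8, 4))  # single linear layer WITHOUT a bias term

# Fix the center c from the initial forward pass (a common Deep SVDD choice).
c = (X @ W).mean(axis=0)

def svdd_loss(W):
    # Mean squared distance of the embeddings to the fixed center c.
    Z = X @ W
    return np.mean(np.sum((Z - c) ** 2, axis=1))

# Why no bias? With a bias b the network could output the constant c for every
# input (W = 0, b = c), driving the loss to zero -- the hypersphere collapse
# described above. Removing bias terms rules out that trivial solution.
initial_loss = svdd_loss(W)
for _ in range(200):
    Z = X @ W
    grad = 2 * X.T @ (Z - c) / len(X)   # gradient of the loss w.r.t. W
    W = W - 0.01 * grad

scores = np.sum((X @ W - c) ** 2, axis=1)  # anomaly score = distance to center
```

At test time, samples with large `scores` (far from the center of the learned hypersphere) would be flagged as anomalous.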
1 code implementation • 4 Jan 2020 • Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder
We develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms.
1 code implementation • 18 Dec 2019 • Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.
Explainable Artificial Intelligence (XAI) Model Compression +2
no code implementations • 8 Dec 2019 • Yi Xiang Marcus Tan, Yuval Elovici, Alexander Binder
We investigate to what extent alternative variants of Artificial Neural Networks (ANNs) are susceptible to adversarial attacks.
1 code implementation • 22 Oct 2019 • Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
In this paper, we focus on a popular and widely used method of XAI, the Layer-wise Relevance Propagation (LRP).
Ranked #1 on Object Detection on SIXray
Explainable Artificial Intelligence (XAI) +3
no code implementations • 15 Aug 2019 • Miriam Hägele, Philipp Seegerer, Sebastian Lapuschkin, Michael Bockmayr, Wojciech Samek, Frederick Klauschen, Klaus-Robert Müller, Alexander Binder
Deep learning has recently gained popularity in digital pathology due to its high prediction quality.
7 code implementations • ICLR 2020 • Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, Marius Kloft
Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets.
no code implementations • 28 May 2019 • Yi Xiang Marcus Tan, Alfonso Iacovazzi, Ivan Homoliak, Yuval Elovici, Alexander Binder
In an attempt to address this gap, we built a set of attacks based on several generative approaches to construct adversarial mouse trajectories that bypass authentication models.
1 code implementation • 26 Feb 2019 • Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior.
2 code implementations • 10 Dec 2018 • Ivan Homoliak, Dominik Breitenbacher, Ondrej Hujnak, Pieter Hartel, Alexander Binder, Pawel Szalachowski
The proposed framework consists of four components (i.e., an authenticator, a client, a hardware wallet, and a smart contract), and it provides 2-factor authentication (2FA) performed in two stages of interaction with the blockchain.
Cryptography and Security
no code implementations • ECCV 2018 • Tian Feng, Quang-Trung Truong, Duc Thanh Nguyen, Jing Yu Koh, Lap-Fai Yu, Alexander Binder, Sai-Kit Yeung
Urban zoning enables various applications in land use analysis and urban planning.
1 code implementation • ICML 2018 • Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, Marius Kloft
Despite the great advances made by deep learning in many machine learning problems, there is a relative dearth of deep learning approaches for anomaly detection.
Ranked #32 on Anomaly Detection on One-class CIFAR-10
no code implementations • 28 May 2018 • Alexander Binder, Michael Bockmayr, Miriam Hägele, Stephan Wienert, Daniel Heim, Katharina Hellweg, Albrecht Stenzinger, Laura Parlow, Jan Budczies, Benjamin Goeppert, Denise Treue, Manato Kotani, Masaru Ishii, Manfred Dietel, Andreas Hocke, Carsten Denkert, Klaus-Robert Müller, Frederick Klauschen
Recent advances in cancer research largely rely on new developments in microscopic or molecular profiling techniques offering high level of detail with respect to either spatial or molecular features, but usually not both.
no code implementations • 25 Aug 2017 • Sebastian Lapuschkin, Alexander Binder, Klaus-Robert Müller, Wojciech Samek
Recently, deep neural networks have demonstrated excellent performance in recognizing age and gender from human face images.
no code implementations • 24 Nov 2016 • Wojciech Samek, Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller
Complex nonlinear models such as deep neural networks (DNNs) have become an important tool for image classification, speech recognition, natural language processing, and many other fields of application.
no code implementations • 29 Jun 2016 • Jing Yu Koh, Wojciech Samek, Klaus-Robert Müller, Alexander Binder
We propose a novel strategy for solving this task when pixel-level annotations are not available, performing it in an almost zero-shot manner by relying on conventional whole-image neural-net classifiers that were trained using large bounding boxes.
no code implementations • 4 Apr 2016 • Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller, Wojciech Samek
Layer-wise relevance propagation is a framework that decomposes the prediction of a deep neural network computed over a sample, e.g. an image, into relevance scores for the single input dimensions of the sample, such as the subpixels of an image.
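As a minimal sketch of this decomposition, the epsilon variant of the LRP redistribution rule can be written down for a tiny two-layer ReLU network; the layer sizes and the epsilon value are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)

x = rng.normal(size=4)
a1 = np.maximum(0, x @ W1 + b1)     # hidden activations
out = a1 @ W2 + b2                  # network output f(x)

eps = 1e-6

def lrp_layer(a, W, R_out):
    # Epsilon rule: redistribute the relevance of each output neuron to its
    # inputs in proportion to the contributions z_ij = a_i * w_ij.
    z = a[:, None] * W
    s = R_out / (z.sum(axis=0) + eps * np.sign(z.sum(axis=0)))
    return (z * s).sum(axis=1)

# Backward relevance pass, starting from the output score.
R2 = out
R1 = lrp_layer(a1, W2, R2)
R0 = lrp_layer(x, W1, R1)           # relevance per input dimension
```

With zero biases and a small epsilon, the input relevances `R0` approximately sum to the network output, which is the conservation property that makes the scores a decomposition of the prediction.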
no code implementations • 21 Mar 2016 • Sebastian Bach, Alexander Binder, Klaus-Robert Müller, Wojciech Samek
We present an application of the Layer-wise Relevance Propagation (LRP) algorithm to state-of-the-art deep convolutional neural networks and Fisher Vector classifiers to compare the image perception and prediction strategies of both classifiers using visualized heatmaps.
4 code implementations • 8 Dec 2015 • Grégoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, Klaus-Robert Müller
Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures.
no code implementations • CVPR 2016 • Sebastian Bach, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek
Fisher Vector classifiers and Deep Neural Networks (DNNs) are popular and successful algorithms for solving image classification problems.
1 code implementation • 21 Sep 2015 • Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller
Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.
no code implementations • 14 Jun 2015 • Yunwen Lei, Alexander Binder, Ürün Dogan, Marius Kloft
We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure.
no code implementations • NeurIPS 2015 • Yunwen Lei, Ürün Dogan, Alexander Binder, Marius Kloft
This paper studies the generalization performance of multi-class classification algorithms, for which we obtain, for the first time, a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis.
no code implementations • 22 Oct 2013 • Wojciech Samek, Alexander Binder, Klaus-Robert Müller
Combining information from different sources is a common way to improve classification accuracy in Brain-Computer Interfacing (BCI).