Search Results for author: Maximilian Augustin

Found 10 papers, 6 papers with code

Analyzing and Explaining Image Classifiers via Diffusion Guidance

no code implementations · 29 Nov 2023 · Maximilian Augustin, Yannic Neuhaus, Matthias Hein

While deep learning has led to huge progress in complex image classification tasks like ImageNet, unexpected failure modes, e.g. via spurious features, call into question how reliably these classifiers work in the wild.

counterfactual · Image Classification · +1

Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet

1 code implementation · ICCV 2023 · Yannic Neuhaus, Maximilian Augustin, Valentyn Boreiko, Matthias Hein

In contrast, we work with ImageNet and validate our results by showing that the presence of the harmful spurious feature of a class alone is sufficient to trigger the prediction of that class.

Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet

1 code implementation · 9 Dec 2022 · Yannic Neuhaus, Maximilian Augustin, Valentyn Boreiko, Matthias Hein

In contrast, we work with ImageNet and validate our results by showing that the presence of the harmful spurious feature of a class alone is sufficient to trigger the prediction of that class.
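As a rough illustration of the validation idea quoted above, one can probe a classifier on images that contain only the suspected spurious feature (no class object) and measure how often the class is still predicted. This is a hedged sketch, not the authors' code; classifier and spurious_only_images are hypothetical stand-ins.

import torch

@torch.no_grad()
def spurious_trigger_rate(classifier, spurious_only_images, class_idx):
    # Fraction of spurious-feature-only images still predicted as class_idx.
    preds = classifier(spurious_only_images).argmax(dim=1)
    return (preds == class_idx).float().mean().item()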

Diffusion Visual Counterfactual Explanations

1 code implementation · 21 Oct 2022 · Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein

Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and a late start of the diffusion process, allows us to generate images with minimal semantic changes to the original ones but a different classification.

counterfactual · Image Classification
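The excerpt above names two key ingredients: classifier guidance with distance regularization, and a late start of the reverse diffusion. The following is a minimal sketch of how such guided sampling could look, assuming hypothetical q_sample/p_sample interfaces on the diffusion model; it is not the DVCE implementation, and the adaptive parameterization of the guidance scale is omitted.

import torch

def dvce_sketch(x_orig, target_class, diffusion_model, classifier,
                num_steps=1000, start_frac=0.5, dist_weight=0.1):
    # Late start: skip the earliest (most destructive) noise levels and
    # initialize from a partially noised copy of the original image.
    t_start = int(num_steps * start_frac)
    x = diffusion_model.q_sample(x_orig, t_start)  # hypothetical forward-noising

    for t in reversed(range(t_start)):
        x = x.detach().requires_grad_(True)
        # Guidance: move toward the target class while staying close to x_orig.
        logits = classifier(x)
        log_prob = torch.log_softmax(logits, dim=1)[:, target_class].sum()
        dist_reg = dist_weight * (x - x_orig).pow(2).sum()
        grad = torch.autograd.grad(log_prob - dist_reg, x)[0]
        # One reverse diffusion step, shifted by the guidance gradient
        # (p_sample with a guidance argument is an assumed interface).
        x = diffusion_model.p_sample(x, t, guidance=grad)
    return x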

Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities

1 code implementation · 20 Jun 2022 · Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein

Moreover, we show that the confidence loss used by Outlier Exposure has an implicit scoring function which differs non-trivially from the theoretically optimal scoring function in the case where training and test out-distribution are the same; this implicit scoring function is again similar to the one used when training an Energy-Based OOD detector or when adding a background class.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
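To make the "same core quantities" point above concrete: several popular OOD scores are simple functions of one and the same set of classifier logits. A generic sketch, not tied to the paper's code; the background-class convention (last logit) is an assumption.

import torch
import torch.nn.functional as F

def ood_scores(logits):
    # Maximum softmax probability (the confidence-style score used with
    # Outlier Exposure training).
    msp = F.softmax(logits, dim=1).max(dim=1).values
    # Energy score: negative logsumexp of the logits (Energy-Based OOD detection).
    energy = -torch.logsumexp(logits, dim=1)
    return msp, energy

def background_class_score(logits_with_bg):
    # With an extra background class, the in-distribution score is the
    # probability mass assigned to the real classes vs. the background
    # (here assumed to be the last logit).
    probs = F.softmax(logits_with_bg, dim=1)
    return 1.0 - probs[:, -1]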

Sparse Visual Counterfactual Explanations in Image Space

1 code implementation · 16 May 2022 · Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein

Visual counterfactual explanations (VCEs) in image space are an important tool for understanding the decisions of image classifiers, as they show under which changes to the image the classifier's decision would change.

counterfactual
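As a hedged sketch of the general VCE idea described above (the paper's actual method, with its sparsity-inducing threat model, is more involved), one can maximize the target-class probability while projecting the perturbation back into a small norm ball around the original image:

import torch

def vce_sketch(x_orig, target_class, classifier, eps=0.3, steps=100, lr=0.01):
    x = x_orig.clone()
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        # Push the classifier toward the target class.
        loss = torch.log_softmax(classifier(x), dim=1)[:, target_class].sum()
        grad = torch.autograd.grad(loss, x)[0]
        x = x + lr * grad.sign()
        # Project back into an L2 ball around the original (illustrative
        # choice; assumes 4D image tensors) and into the valid pixel range.
        delta = x - x_orig
        norm = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
        factor = (eps / norm).clamp(max=1.0).view(-1, 1, 1, 1)
        x = (x_orig + delta * factor).clamp(0.0, 1.0)
    return x.detach()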

Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective

no code implementations · 29 Sep 2021 · Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein

When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
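The "shared fashion" setup in the excerpt can be pictured as one backbone with two heads: a K-way classifier and a binary in-vs-out discriminator, trained jointly. An illustrative sketch under these assumptions; names and losses are not the paper's exact recipe.

import torch
import torch.nn as nn

class SharedOODModel(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                 # shared feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.ood_head = nn.Linear(feat_dim, 1)   # binary in/out logit

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.ood_head(feats)

def joint_loss(model, x_in, y_in, x_out):
    ce = nn.CrossEntropyLoss()
    bce = nn.BCEWithLogitsLoss()
    logits_in, d_in = model(x_in)
    _, d_out = model(x_out)
    # Classify in-distribution data; discriminate in vs. out samples.
    return (ce(logits_in, y_in)
            + bce(d_in.squeeze(1), torch.ones_like(d_in.squeeze(1)))
            + bce(d_out.squeeze(1), torch.zeros_like(d_out.squeeze(1))))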

The Needle in the Haystack: Out-distribution aware Self-training in an Open-World Setting

no code implementations · 29 Sep 2021 · Maximilian Augustin, Matthias Hein

Traditional semi-supervised learning (SSL) has focused on the closed-world assumption, where all unlabeled samples are task-related.

Out-of-Distribution Detection · Self-Learning

Out-distribution aware Self-training in an Open World Setting

no code implementations · 21 Dec 2020 · Maximilian Augustin, Matthias Hein

The goal of this paper is to leverage unlabeled data in an open world setting to further improve prediction performance.

Adversarial Robustness on In- and Out-Distribution Improves Explainability

1 code implementation · ECCV 2020 · Maximilian Augustin, Alexander Meinke, Matthias Hein

Neural networks have led to major improvements in image classification but suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions.

Adversarial Robustness · Image Classification
