no code implementations • 29 Nov 2023 • Maximilian Augustin, Yannic Neuhaus, Matthias Hein
While deep learning has led to huge progress in complex image classification tasks like ImageNet, unexpected failure modes, e.g. via spurious features, call into question how reliably these classifiers work in the wild.
1 code implementation • ICCV 2023 • Yannic Neuhaus, Maximilian Augustin, Valentyn Boreiko, Matthias Hein
In contrast, we work with ImageNet and validate our results by showing that the presence of the harmful spurious feature of a class alone is sufficient to trigger the prediction of that class.
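A minimal toy sketch of the claim, not the paper's actual pipeline: a linear "classifier" whose weights (all illustrative) have latched onto a spurious cue, so that setting that single feature, with the class object entirely absent, is already enough to trigger the class prediction.

```python
# Illustrative only: hand-picked weights for [object feature, spurious feature].
import numpy as np

w = np.array([1.5, 2.0])   # the spurious cue carries more weight than the object
THRESH = 1.0               # decision threshold (arbitrary for this sketch)

def is_bird(x):
    """Predict the class whenever the weighted feature score exceeds the threshold."""
    return bool(w @ x > THRESH)

assert is_bird(np.array([1.0, 0.0]))      # the object itself triggers the class
assert is_bird(np.array([0.0, 1.0]))      # the spurious cue ALONE also triggers it
assert not is_bird(np.array([0.0, 0.0]))  # neither present: no prediction
```

The second assertion is the point of the validation described above: the spurious feature by itself suffices to flip the prediction.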
1 code implementation • 21 Oct 2022 • Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein
Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, combined with distance regularization and a late start of the diffusion process, allows us to generate images with minimal semantic changes to the original ones but a different classification.
1 code implementation • 20 Jun 2022 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
Moreover, we show that the confidence loss used by Outlier Exposure has an implicit scoring function that differs in a non-trivial fashion from the theoretically optimal scoring function for the case where training and test out-distribution are the same; the optimal scoring function is, in turn, similar to the one used when training an Energy-Based OOD detector or when adding a background class.
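A hedged sketch of the two scoring functions contrasted above, computed from the same classifier logits: the maximum-softmax confidence (the quantity shaped by a confidence loss) and the energy score used by Energy-Based OOD detectors. The logit values are made up for illustration; both scores should be higher on the confident in-distribution input.

```python
import numpy as np

def confidence_score(logits):
    """Maximum softmax probability; higher means 'more in-distribution'."""
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

def energy_score(logits, T=1.0):
    """Negative free energy, T*logsumexp(logits/T); higher means 'more in-distribution'."""
    return T * np.log(np.exp(logits / T).sum())

in_logits  = np.array([8.0, 1.0, 0.5])   # confident, peaked prediction
ood_logits = np.array([1.1, 1.0, 0.9])   # near-uniform logits

assert confidence_score(in_logits) > confidence_score(ood_logits)
assert energy_score(in_logits) > energy_score(ood_logits)
```

Both scores rank these two inputs the same way, but they are not monotone transforms of each other in general, which is why the implicit scoring function of a confidence loss can differ non-trivially from an energy-based one.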
Out-of-Distribution (OOD) Detection
1 code implementation • 16 May 2022 • Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein
Visual counterfactual explanations (VCEs) in image space are an important tool for understanding the decisions of image classifiers, as they show which changes to an image would alter the classifier's decision.
no code implementations • 29 Sep 2021 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure.
Out-of-Distribution (OOD) Detection
no code implementations • 29 Sep 2021 • Maximilian Augustin, Matthias Hein
Traditional semi-supervised learning (SSL) has focused on the closed world assumption where all unlabeled samples are task-related.
no code implementations • 21 Dec 2020 • Maximilian Augustin, Matthias Hein
The goal of this paper is to leverage unlabeled data in an open world setting to further improve prediction performance.
1 code implementation • ECCV 2020 • Maximilian Augustin, Alexander Meinke, Matthias Hein
Neural networks have led to major improvements in image classification but suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions.