Search Results for author: Julian Bitterwolf

Found 7 papers, 6 papers with code

In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation

1 code implementation • 1 Jun 2023 • Julian Bitterwolf, Maximilian Müller, Matthias Hein

When the in-distribution (ID) is ImageNet-1K, OOD detection performance is commonly evaluated on only a small range of test OOD datasets.

Ranked #1 on Out-of-Distribution Detection on ImageNet-1k vs NINCO (using extra training data)

Open Set Learning · Out-of-Distribution Detection · +1

Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities

1 code implementation • 20 Jun 2022 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein

Moreover, we show that the confidence loss used by Outlier Exposure has an implicit scoring function that differs in a non-trivial way from the theoretically optimal scoring function for the case where the training and test out-distributions coincide; this implicit score is again similar to the one obtained when training an Energy-Based OOD detector or when adding a background class.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
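The snippet above contrasts implicit scoring functions, including the one arising from an Energy-Based OOD detector. As a rough illustration (not code from any of the listed papers), two standard logit-based OOD scores, maximum softmax probability and the energy score, can be sketched in NumPy; higher values indicate "more in-distribution" for both:

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability (MSP) baseline score.

    Works on a single logit vector of shape (K,) or a batch (N, K).
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def energy_score(logits, temperature=1.0):
    """Energy-based score: T * logsumexp(logits / T), i.e. the negative energy.

    Computed with the standard max-shift trick for numerical stability.
    """
    z = logits / temperature
    m = z.max(axis=-1)
    return temperature * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))
```

For a uniform logit vector both scores are at their minimum for a fixed number of classes, while a sharply peaked logit vector scores higher on both, which is the sense in which such scores separate confident in-distribution inputs from OOD inputs.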

Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective

no code implementations • 29 Sep 2021 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein

When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection

Provably Robust Detection of Out-of-distribution Data (almost) for free

1 code implementation • 8 Jun 2021 • Alexander Meinke, Julian Bitterwolf, Matthias Hein

The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty.

Out of Distribution (OOD) Detection

Certifiably Adversarially Robust Detection of Out-of-Distribution Data

2 code implementations • NeurIPS 2020 • Julian Bitterwolf, Alexander Meinke, Matthias Hein

Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class.

Adversarial Robustness · Out of Distribution (OOD) Detection

A simple way to make neural networks robust against diverse image corruptions

3 code implementations • ECCV 2020 • Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel

The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem

1 code implementation • CVPR 2019 • Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf

We show that this technique is surprisingly effective at reducing the confidence of predictions far away from the training data, while maintaining high-confidence predictions and a test error on the original classification task comparable to standard training.

General Classification
