1 code implementation • 1 Jun 2023 • Julian Bitterwolf, Maximilian Müller, Matthias Hein
The OOD detection performance when the in-distribution (ID) is ImageNet-1K is commonly tested on only a small range of OOD test datasets.
Ranked #1 on Out-of-Distribution Detection on ImageNet-1k vs NINCO (using extra training data)
1 code implementation • 20 Jun 2022 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
Moreover, we show that the confidence loss used by Outlier Exposure induces an implicit scoring function that differs in a non-trivial fashion from the theoretically optimal scoring function for the case where training and test out-distribution are the same; this implicit score is in turn similar to the one used when training an Energy-Based OOD detector or when adding a background class.
Out-of-Distribution Detection
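The scoring functions contrasted in the abstract above are not spelled out here; as a rough illustration under common conventions, both the maximum softmax probability (MSP) and the energy-based score can be computed directly from a classifier's logits. The function names and toy logit values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: higher = more in-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize exponentials
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def energy_score(logits):
    """Negative free energy logsumexp(logits): higher = more in-distribution."""
    z = logits.max(axis=-1)
    return z + np.log(np.exp(logits - z[..., None]).sum(axis=-1))

# A confidently peaked (ID-like) logit vector vs. a flat (OOD-like) one
id_logits = np.array([[8.0, 0.5, 0.1]])
ood_logits = np.array([[1.0, 0.9, 1.1]])
print(msp_score(id_logits) > msp_score(ood_logits))        # [ True]
print(energy_score(id_logits) > energy_score(ood_logits))  # [ True]
```

Both scores are thresholded to decide "in" vs. "out"; they rank these two examples the same way, but in general they can disagree, which is part of what makes the choice of scoring function non-trivial.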
no code implementations • 29 Sep 2021 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure.
Out-of-Distribution Detection
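The "shared fashion" training mentioned above is not detailed here; as a minimal sketch under common conventions, a shared backbone can feed both a classifier head (cross-entropy on labeled ID data) and a binary in-vs-out discriminator head (binary cross-entropy on ID and OOD inputs), with the two losses summed. All values and the equal loss weighting below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax_xent(logits, label):
    """Cross-entropy of one example's logits against an integer class label."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def bce_from_logit(logit, target):
    """Binary cross-entropy for the in-vs-out discriminator head."""
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

# Toy shared-training objective: classifier head on an ID example, plus the
# discriminator head on one ID input (target 1) and one OOD input (target 0).
cls_logits = np.array([2.0, 0.1, -1.0])    # hypothetical classifier logits
loss = (softmax_xent(cls_logits, label=0)
        + bce_from_logit(3.0, target=1)    # discriminator on ID input
        + bce_from_logit(-2.5, target=0))  # discriminator on OOD input
print(round(float(loss), 4))
```

At test time the discriminator's sigmoid output can itself serve as the OOD score, which is what makes this decomposition directly comparable to Outlier Exposure.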
1 code implementation • 8 Jun 2021 • Alexander Meinke, Julian Bitterwolf, Matthias Hein
The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty.
2 code implementations • NeurIPS 2020 • Julian Bitterwolf, Alexander Meinke, Matthias Hein
Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class.
3 code implementations • ECCV 2020 • Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.
1 code implementation • CVPR 2019 • Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
We show that this technique is surprisingly effective at reducing the confidence of predictions far away from the training data, while maintaining high-confidence predictions and test error comparable to standard training on the original classification task.
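The abstract above does not define the training objective; one common way to enforce low confidence far from the training data is to add a term that pushes the predictive distribution toward uniform on out-distribution (e.g. noise) inputs. The sketch below penalizes the maximum log-confidence on such inputs, which is minimized exactly at the uniform distribution; the function name and the implicit weighting of 1.0 are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def low_conf_ood_loss(id_logits, id_labels, ood_logits):
    """Cross-entropy on ID data plus a penalty on the maximum
    log-confidence over OOD inputs (minimal when predictions are uniform)."""
    logp_id = log_softmax(id_logits)
    ce = -logp_id[np.arange(len(id_labels)), id_labels].mean()
    ood_term = log_softmax(ood_logits).max(axis=-1).mean()
    return ce + ood_term  # hypothetical equal weighting of both terms

# Flat OOD predictions should be preferred over peaked ones:
uniform = np.zeros((1, 3))
peaked = np.array([[6.0, 0.0, 0.0]])
l_flat = low_conf_ood_loss(np.array([[4.0, 0.0, 0.0]]), np.array([0]), uniform)
l_peak = low_conf_ood_loss(np.array([[4.0, 0.0, 0.0]]), np.array([0]), peaked)
print(l_flat < l_peak)  # flat OOD predictions yield the lower loss
```

Because the ID cross-entropy term is untouched, the classifier keeps its high-confidence predictions on in-distribution data, matching the trade-off the abstract describes.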