no code implementations • 12 Dec 2023 • Alexander Meinke, Owain Evans
Nevertheless, the effect of declarative statements on model likelihoods is small in absolute terms and increases surprisingly little with model size (i.e. from 330 million to 175 billion parameters).
1 code implementation • 20 Jun 2022 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
Moreover, we show that the confidence loss used by Outlier Exposure has an implicit scoring function that differs non-trivially from the theoretically optimal scoring function for the case where training and test out-distributions are the same; that optimal scoring function is similar to the one used when training an Energy-Based OOD detector or when adding a background class.
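As a minimal sketch of the two scoring functions contrasted here, the snippet below computes a maximum-softmax-probability (MSP) confidence score, as implicitly used by confidence-loss training, and an energy score of the form used by Energy-Based OOD detectors. The logit values are hypothetical; this illustrates the scoring rules only, not the paper's training procedure.

```python
import math

def softmax_confidence(logits):
    # MSP confidence score: the maximum softmax probability.
    # Higher values indicate the input looks more in-distribution.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return max(exps) / sum(exps)

def energy_score(logits, temperature=1.0):
    # Energy score: -T * logsumexp(logits / T), computed stably.
    # Lower energy indicates the input looks more in-distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    return -temperature * (m + math.log(sum(math.exp(s - m) for s in scaled)))

# Hypothetical logits: a confidently classified input vs. a flat, ambiguous one.
confident = [10.0, 0.0, 0.0]
ambiguous = [1.0, 1.0, 1.0]
print(softmax_confidence(confident), softmax_confidence(ambiguous))
print(energy_score(confident), energy_score(ambiguous))
```

Both scores rank the confident input as more in-distribution than the ambiguous one, but they are not monotone transforms of each other in general, which is the non-trivial difference the abstract refers to.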
Out-of-Distribution Detection
no code implementations • 29 Sep 2021 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure.
Out-of-Distribution Detection
1 code implementation • 8 Jun 2021 • Alexander Meinke, Julian Bitterwolf, Matthias Hein
The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty.
2 code implementations • NeurIPS 2020 • Julian Bitterwolf, Alexander Meinke, Matthias Hein
Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class.
1 code implementation • ECCV 2020 • Maximilian Augustin, Alexander Meinke, Matthias Hein
Neural networks have led to major improvements in image classification but suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-of-distribution samples, and inscrutable black-box decisions.
1 code implementation • ICLR 2020 • Alexander Meinke, Matthias Hein
It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data.
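The overconfidence phenomenon described here can be illustrated with a toy computation: far from the training data a ReLU network behaves affinely, so scaling an input up scales the logits roughly linearly, and the softmax then saturates toward a one-hot prediction. The logits below are hypothetical stand-ins for such a network's output; this is an illustration of the softmax saturation effect, not the paper's construction.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits for some input x; scaling x by alpha scales the
# logits by roughly alpha in the affine regime far from the data.
logits = [1.2, 0.7, 0.1]
for alpha in (1, 10, 100):
    scaled = [alpha * z for z in logits]
    print(alpha, max(softmax(scaled)))
```

The maximum softmax probability climbs toward 1 as alpha grows, even though nothing about the input has become more class-like; this is the arbitrary overconfidence far from the training data.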