Search Results for author: Alexander Meinke

Found 7 papers, 5 papers with code

Tell, don't show: Declarative facts influence how LLMs generalize

no code implementations · 12 Dec 2023 · Alexander Meinke, Owain Evans

Nevertheless, the effect of declarative statements on model likelihoods is small in absolute terms and increases surprisingly little with model size (i.e., from 330 million to 175 billion parameters).

Fairness

Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities

1 code implementation · 20 Jun 2022 · Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein

Moreover, we show that the confidence loss used by Outlier Exposure has an implicit scoring function that differs non-trivially from the theoretically optimal scoring function when the training and test out-distributions coincide; a similar implicit score arises when training an Energy-Based OOD detector or when adding a background class.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
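The abstract above compares implicit scoring functions of different OOD detectors. As a minimal illustrative sketch (not the paper's code), two standard post-hoc scores computed from a classifier's logits are the energy score and the maximum softmax probability:

```python
import numpy as np

def energy_score(logits):
    # Energy score: -logsumexp(logits), computed stably.
    # Higher values suggest the input is out-of-distribution.
    m = logits.max()
    return -(m + np.log(np.sum(np.exp(logits - m))))

def max_softmax_score(logits):
    # Maximum softmax probability.
    # Lower values suggest the input is out-of-distribution.
    e = np.exp(logits - logits.max())
    return (e / e.sum()).max()

confident = np.array([10.0, 0.0, 0.0])  # in-distribution-like logits
flat = np.array([0.0, 0.0, 0.0])        # uninformative, OOD-like logits
```

On confident logits the energy score is low and the max-softmax score is high; on flat logits both move in the OOD direction, which is the behavior any such scoring function exploits.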

Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective

no code implementations · 29 Sep 2021 · Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein

When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
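The binary in-/out-distribution discriminator described above can be sketched as a second head on a shared backbone. This is a hypothetical PyTorch sketch of the general architecture, not the authors' implementation; all names and dimensions are made up:

```python
import torch
import torch.nn as nn

class ClassifierWithOODHead(nn.Module):
    """Shared backbone with a K-way classifier head and a binary
    in-/out-distribution discriminator head (illustrative sketch)."""

    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, num_classes)  # class logits
        self.ood_head = nn.Linear(hidden, 1)            # logit of p(in-distribution)

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.ood_head(h).squeeze(-1)

model = ClassifierWithOODHead()
x = torch.randn(4, 32)
class_logits, in_logit = model(x)
# OOD score: high values of 1 - sigmoid(in_logit) flag likely OOD inputs.
ood_score = 1.0 - torch.sigmoid(in_logit)
```

In joint training, the classifier head would receive a standard cross-entropy loss on labeled in-distribution data while the binary head is trained to separate in-distribution from auxiliary out-distribution samples, with gradients flowing through the shared backbone.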

Provably Robust Detection of Out-of-distribution Data (almost) for free

1 code implementation · 8 Jun 2021 · Alexander Meinke, Julian Bitterwolf, Matthias Hein

The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty.

Out of Distribution (OOD) Detection

Certifiably Adversarially Robust Detection of Out-of-Distribution Data

2 code implementations · NeurIPS 2020 · Julian Bitterwolf, Alexander Meinke, Matthias Hein

Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class.

Adversarial Robustness · Out of Distribution (OOD) Detection

Adversarial Robustness on In- and Out-Distribution Improves Explainability

1 code implementation · ECCV 2020 · Maximilian Augustin, Alexander Meinke, Matthias Hein

Neural networks have led to major improvements in image classification, but they suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions.

Adversarial Robustness · Image Classification

Towards neural networks that provably know when they don't know

1 code implementation · ICLR 2020 · Alexander Meinke, Matthias Hein

It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data.

Out-of-Distribution Detection
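The over-confidence result mentioned above can be illustrated numerically with a purely linear classifier, which is a (trivial) special case of a piecewise-linear ReLU network; the weights and input here are made up for the sketch:

```python
import numpy as np

def softmax_confidence(logits):
    # Maximum softmax probability of a logit vector.
    e = np.exp(logits - logits.max())
    return (e / e.sum()).max()

# A linear 3-class classifier on 2-D inputs.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
x = np.array([2.0, 1.0])

# Scaling the input away from the origin scales all logit gaps linearly,
# so the softmax confidence is driven toward 1 far from the training data.
conf_near = softmax_confidence(W @ x)           # scale 1
conf_far = softmax_confidence(W @ (100.0 * x))  # scale 100
```

Here `conf_near` is a moderate confidence, while `conf_far` is essentially 1, even though the far-away point carries no class information; this is the behavior that motivates provable OOD guarantees.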
