Search Results for author: Robert Geirhos

Found 27 papers, 17 papers with code

Are Vision Language Models Texture or Shape Biased and Can We Steer Them?

1 code implementation · 14 Mar 2024 · Paul Gavrikov, Jovita Lukasik, Steffen Jung, Robert Geirhos, Bianca Lamm, Muhammad Jehanzeb Mirza, Margret Keuper, Janis Keuper

If text does indeed influence visual biases, this suggests that we may be able to steer visual biases not just through visual input but also through language: a hypothesis that we confirm through extensive experiments.

Image Captioning · Image Classification +3

Neither hype nor gloom do DNNs justice

no code implementations · 8 Dec 2023 · Felix A. Wichmann, Simon Kornblith, Robert Geirhos

Neither the hype exemplified in some exaggerated claims about deep neural networks (DNNs), nor the gloom expressed by Bowers et al. do DNNs as models in vision science justice: DNNs rapidly evolve, and today's limitations are often tomorrow's successes.

Intriguing properties of generative classifiers

1 code implementation · 28 Sep 2023 · Priyank Jaini, Kevin Clark, Robert Geirhos

What is the best paradigm to recognize objects -- discriminative inference (fast but potentially prone to shortcut learning) or using a generative model (slow but potentially more robust)?

Object Recognition

Don't trust your eyes: on the (un)reliability of feature visualizations

1 code implementation · 7 Jun 2023 · Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim

Today, visualization methods form the foundation of our knowledge about the internal workings of neural networks, as a type of mechanistic interpretability.

Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception?

no code implementations · 26 May 2023 · Felix A. Wichmann, Robert Geirhos

Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision due to their remarkable successes in tasks like object classification and segmentation.

Object · Object Recognition

Beyond neural scaling laws: beating power law scaling via data pruning

3 code implementations · 29 Jun 2022 · Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari S. Morcos

Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning.

Benchmarking
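For reference, the power-law scaling referred to in the abstract above is commonly written in the generic form

    \varepsilon(N) \;\approx\; a\, N^{-\alpha} + \varepsilon_\infty

where \varepsilon is test error, N is training-set (or model) size, \alpha > 0 is an empirical scaling exponent, and \varepsilon_\infty is an irreducible error floor (notation is illustrative, not necessarily the paper's); the title's claim is that suitable data pruning can make error fall faster than this power law.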

The developmental trajectory of object recognition robustness: children are like small adults but unlike big deep neural networks

1 code implementation · 20 May 2022 · Lukas S. Huber, Robert Geirhos, Felix A. Wichmann

Unlike adults, whose object recognition performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images.

Object · Object Recognition +1

Trivial or impossible -- dichotomous data difficulty masks model differences (on ImageNet and beyond)

1 code implementation · 12 Oct 2021 · Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann

We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0% "trivial" and 11.5% "impossible" images (beyond label errors).

Inductive Bias
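As an illustrative reading of the DDD statistic quoted above (a minimal sketch, assuming "trivial" means every investigated model classifies the image correctly and "impossible" means none does; the function name and input format are hypothetical, not from the paper):

    import numpy as np

    def dichotomous_difficulty(correct):
        # correct: boolean array of shape (num_models, num_images),
        # True where a model classifies an image correctly.
        correct = np.asarray(correct, dtype=bool)
        trivial = correct.all(axis=0).mean()        # every model gets it right
        impossible = (~correct).all(axis=0).mean()  # every model gets it wrong
        return trivial, impossible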

Trivial or Impossible --- dichotomous data difficulty masks model differences (on ImageNet and beyond)

no code implementations · ICLR 2022 · Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann

We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0% "trivial" and 11.5% "impossible" images (beyond label errors).

Inductive Bias

ImageNet suffers from dichotomous data difficulty

no code implementations · NeurIPS Workshop ImageNet_PPF 2021 · Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann

We find that the ImageNet validation set suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.3% "trivial" and 11.3% "impossible" images.

Inductive Bias

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

1 code implementation · NeurIPS 2021 · Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel

A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.

Explainable artificial intelligence

Partial success in closing the gap between human and machine vision

1 code implementation · NeurIPS 2021 · Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.

Image Classification

Exemplary natural images explain CNN activations better than synthetic feature visualizations

no code implementations · ICLR 2021 · Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.

Informativeness

Natural Images are More Informative for Interpreting CNN Activations than State-of-the-Art Synthetic Feature Visualizations

no code implementations · NeurIPS Workshop SVRHM 2020 · Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.

Informativeness

Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency

1 code implementation · NeurIPS 2020 · Robert Geirhos, Kristof Meding, Felix A. Wichmann

Here we introduce trial-by-trial error consistency, a quantitative analysis for measuring whether two decision making systems systematically make errors on the same inputs.

Decision Making · Object Recognition
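A minimal sketch of the trial-by-trial error consistency idea described above, assuming the standard Cohen's-kappa-style formulation (observed agreement corrected for the agreement expected from the two accuracies alone); names and input format are illustrative:

    import numpy as np

    def error_consistency(correct_a, correct_b):
        # correct_a, correct_b: boolean per-trial correctness of two decision
        # makers (e.g. a CNN and a human observer) on the same set of trials.
        correct_a = np.asarray(correct_a, dtype=bool)
        correct_b = np.asarray(correct_b, dtype=bool)
        c_obs = np.mean(correct_a == correct_b)        # both right or both wrong
        p_a, p_b = correct_a.mean(), correct_b.mean()
        c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)      # agreement expected by chance
        return (c_obs - c_exp) / (1 - c_exp)           # kappa-style consistency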

Shortcut Learning in Deep Neural Networks

2 code implementations · 16 Apr 2020 · Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.

Benchmarking

Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

4 code implementations · 17 Jul 2019 · Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel

The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving.

Autonomous Driving · Benchmarking +5

Comparison-Based Framework for Psychophysics: Lab versus Crowdsourcing

no code implementations · 17 May 2019 · Siavash Haghiri, Patricia Rubisch, Robert Geirhos, Felix Wichmann, Ulrike von Luxburg

In this paper we study whether the use of comparison-based (ordinal) data, combined with machine learning algorithms, can boost the reliability of crowdsourcing studies for psychophysics, such that they can achieve performance close to a lab experiment.

BIG-bench Machine Learning

Generalisation in humans and deep neural networks

2 code implementations · NeurIPS 2018 · Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann

We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations.

Object Recognition

Comparing deep neural networks against humans: object recognition when the signal gets weaker

1 code implementation · 21 Jun 2017 · Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann

In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.

General Classification · Object +1
