no code implementations • ECCV 2020 • Matthias Tangemann, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Where people look when watching videos is believed to be heavily influenced by temporal patterns.
1 code implementation • NeurIPS 2023 • Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge
Test-Time Adaptation (TTA) allows pre-trained models to be updated to changing data distributions at deployment time.
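As a rough illustration of the idea (not this paper's specific method), a common TTA baseline adapts a small set of parameters by minimizing the entropy of the model's predictions on unlabeled test batches. The sketch below assumes a frozen random linear classifier and adapts only a class bias; all names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

# hypothetical frozen model: a fixed random linear classifier
W = rng.normal(size=(8, 3))
X = rng.normal(size=(32, 8))   # unlabeled batch from the (shifted) test distribution

b = np.zeros(3)                # the only adapted parameters (stand-in for e.g. BN affine params)
h_before = mean_entropy(softmax(X @ W + b))

lr = 0.5
for _ in range(50):
    p = softmax(X @ W + b)
    H = -(p * np.log(p + 1e-12)).sum(axis=-1)
    # analytic gradient of entropy w.r.t. logits: dH/dz_j = -p_j (log p_j + H)
    grad = (-p * (np.log(p + 1e-12) + H[:, None])).mean(axis=0)
    b -= lr * grad             # descend on mean prediction entropy

h_after = mean_entropy(softmax(X @ W + b))
```

After adaptation the model's predictions on the test batch are more confident (lower mean entropy); whether that confidence is warranted under a given distribution shift is exactly what work like this paper examines.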
no code implementations • 29 Dec 2021 • Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge
Exploiting such correlations may increase predictive performance on noisy data; however, often correlations are not robust (e.g., they may change between domains, datasets, or applications) and models that exploit them do not generalize when correlations shift.
1 code implementation • 13 Oct 2021 • Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf
Learning generative object models from unlabelled videos is a long-standing problem and is required for causal scene modeling.
2 code implementations • ICCV 2021 • Akis Linardos, Matthias Kümmerer, Ori Press, Matthias Bethge
Since 2014, transfer learning has been the key driver of improvements in spatial saliency prediction; however, progress has stagnated over the last 3-5 years.
no code implementations • 24 Feb 2021 • Matthias Kümmerer, Matthias Bethge
Recent years have seen a surge in models predicting the scanpaths of fixations made by humans when viewing images.
1 code implementation • NeurIPS 2019 • Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge
We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.
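For intuition about what "gradient-based adversarial attack" means here, the following is a generic projected-gradient sketch (not the attacks introduced in this paper) against a hypothetical linear softmax classifier, where the loss gradient is available in closed form. All model parameters and attack hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# hypothetical victim model: a fixed random linear classifier (stand-in for a network)
d, k = 16, 4
W = rng.normal(size=(d, k))
x = rng.normal(size=d)
p0 = softmax(W.T @ x)
y = int(p0.argmax())               # attack the model's own predicted class

eps, alpha, steps = 0.25, 0.05, 20
x_adv = x.copy()
for _ in range(steps):
    p = softmax(W.T @ x_adv)
    grad = W @ (p - np.eye(k)[y])  # exact gradient of -log p_y for the linear model
    x_adv = x_adv + alpha * np.sign(grad)      # ascend the loss (signed gradient step)
    x_adv = x + np.clip(x_adv - x, -eps, eps)  # project back into the L-inf ball

p_adv = softmax(W.T @ x_adv)
```

Points (a)-(d) in the abstract concern exactly the weaknesses of this kind of fixed-step scheme: it depends on informative gradients, on tuned step sizes, and on a hard-coded perturbation criterion.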
no code implementations • 18 Dec 2017 • Leon A. Gatys, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Thus, manipulating fixation patterns to guide human attention is an exciting challenge in digital image processing.
no code implementations • ECCV 2018 • Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Here we show that no single saliency map can perform well under all metrics.
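To make the claim concrete: different saliency metrics score the same map in incompatible ways, so a map tuned for one metric can lose under another. Below is a minimal sketch of two standard metrics, NSS and (pixel-based) AUC, evaluated on a toy Gaussian saliency map with synthetic fixations; the map, fixation distribution, and sizes are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def nss(saliency, fix_rows, fix_cols):
    # Normalized Scanpath Saliency: z-score the map, average at fixated pixels
    s = (saliency - saliency.mean()) / saliency.std()
    return float(s[fix_rows, fix_cols].mean())

def auc(saliency, fix_rows, fix_cols):
    # AUC: fixated pixels are positives, all remaining pixels are negatives
    flat_idx = np.ravel_multi_index((fix_rows, fix_cols), saliency.shape)
    pos = saliency[fix_rows, fix_cols]
    neg = np.delete(saliency.ravel(), flat_idx)
    diff = pos[:, None] - neg[None, :]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

# toy example: centered Gaussian "saliency map", fixations clustered near the center
h, w = 40, 40
yy, xx = np.mgrid[0:h, 0:w]
sal = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 6.0 ** 2))
fr = np.clip(rng.normal(h / 2, 4, 30).astype(int), 0, h - 1)
fc = np.clip(rng.normal(w / 2, 4, 30).astype(int), 0, w - 1)

nss_val = nss(sal, fr, fc)
auc_val = auc(sal, fr, fc)
```

NSS rewards high z-scored values at fixations while AUC only cares about ranking, which is why postprocessing (e.g. blurring or center-bias) that helps one metric can hurt another, as the paper argues.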
no code implementations • 5 Oct 2016 • Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Here we present DeepGaze II, a model that predicts where people look in images.
1 code implementation • 4 Nov 2014 • Matthias Kümmerer, Lucas Theis, Matthias Bethge
Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations.
no code implementations • 26 Sep 2014 • Matthias Kümmerer, Thomas Wallis, Matthias Bethge
Among the many complex factors driving gaze placement, the properties of an image that are associated with fixations under free viewing conditions have been studied extensively.