no code implementations • 6 May 2024 • Leonhard Hennicke, Christian Medeiros Adriano, Holger Giese, Jan Mathias Koehler, Lukas Schott
Building on our insight that the later layers are mainly responsible for the drop, we investigate the data efficiency of fine-tuning a synthetically trained model on real data, restricted to only those last layers.
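The general recipe behind last-layer fine-tuning — freeze the earlier layers and update only the final ones on real data — can be sketched with a toy two-layer network in numpy. This is an illustrative sketch of the technique, not the authors' actual architecture or training setup; all names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network standing in for a synthetically pretrained model
# (weights are random here for illustration).
W1 = rng.normal(size=(8, 4))   # earlier layer: kept frozen
W2 = rng.normal(size=(4, 1))   # last layer: fine-tuned on real data

def forward(X):
    h = np.tanh(X @ W1)        # frozen feature extractor
    return h @ W2, h

# Hypothetical "real" data for the fine-tuning stage.
X = rng.normal(size=(32, 8))
y = rng.normal(size=(32, 1))

loss_before = float(np.mean((forward(X)[0] - y) ** 2))

lr = 0.1
for _ in range(200):
    pred, h = forward(X)
    err = pred - y
    grad_W2 = h.T @ err / len(X)   # gradient w.r.t. the last layer only
    W2 -= lr * grad_W2             # W1 is never updated

loss_after = float(np.mean((forward(X)[0] - y) ** 2))
```

Because only `W2` is updated, the number of trainable parameters (and hence the real-data requirement) is a small fraction of the full model.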
no code implementations • 8 Nov 2023 • Cathrin Elich, Lukas Kirchdorfer, Jan M. Köhler, Lukas Schott
Second, the notion of gradient conflicts has often been phrased as a specific problem in MTL.
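A "gradient conflict" in multi-task learning is commonly defined as a negative cosine similarity between per-task gradients. A minimal numpy sketch of that check (illustrative only — not this paper's experimental protocol):

```python
import numpy as np

def grad_cosine(g1, g2):
    """Cosine similarity between two task gradients; a negative value
    is the usual definition of a 'gradient conflict' in MTL."""
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

# Hypothetical per-task gradients of a shared parameter vector.
g_task_a = np.array([1.0, 0.0, 1.0])
g_task_b = np.array([-1.0, 0.5, 0.0])

cos = grad_cosine(g_task_a, g_task_b)
conflict = cos < 0   # True: these two tasks pull the shared weights apart
```

Methods such as gradient projection then modify one gradient whenever this condition fires; whether that helps in practice is exactly the kind of question the abstract above refers to.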
1 code implementation • 6 Oct 2022 • Martin Bjerke, Lukas Schott, Kristopher T. Jensen, Claudia Battistin, David A. Klindt, Benjamin A. Dunn
These innovations lead to more interpretable models of neural population activity that train well and perform better even on mixtures of complex latent manifolds.
no code implementations • 1 Oct 2021 • Roland S. Zimmermann, Lukas Schott, Yang Song, Benjamin A. Dunn, David A. Klindt
In this work, we investigate score-based generative models as classifiers for natural images.
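Using a generative model as a classifier follows the Bayes recipe: fit a class-conditional model per class and predict argmax over the class log-likelihoods. The sketch below substitutes 1-D Gaussians for the learned score models — the decision rule is the same, but everything else (classes, parameters) is a hypothetical stand-in:

```python
import numpy as np

# Toy class-conditional densities: 1-D Gaussians instead of score models.
mu = {0: -2.0, 1: 2.0}
sigma = 1.0

def log_likelihood(x, c):
    # log N(x; mu_c, sigma^2)
    return -0.5 * ((x - mu[c]) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma**2)

def classify(x):
    # Bayes rule with a uniform class prior: argmax_c log p(x | c)
    return max(mu, key=lambda c: log_likelihood(x, c))

pred_neg = classify(-1.5)  # closer to class 0's mode
pred_pos = classify(0.7)   # closer to class 1's mode
```

With score-based models, the per-class log-likelihood is instead obtained from the learned score function (e.g. via the probability-flow ODE), but the classification rule is unchanged.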
1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel
An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.
1 code implementation • ICLR 2021 • David Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan Paiton
We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos.
Ranked #1 on Disentanglement on Natural Sprites
3 code implementations • ECCV 2020 • Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.
3 code implementations • ICLR 2019 • Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations. Even on MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are both large and semantically meaningful to humans.
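The core of an adversarial perturbation is easiest to see on a linear model, where the worst-case L-infinity attack has a closed form: shift every coordinate by ε in the direction that most decreases the score (the FGSM direction). A minimal numpy sketch with hypothetical weights — not the robust Bayesian model the paper itself proposes:

```python
import numpy as np

# Linear "classifier": score(x) = w @ x; positive score => class 1.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.2, 0.1])   # clean input, scored as class 1
eps = 0.3                        # L-infinity perturbation budget

score_clean = float(w @ x)
# Worst-case L-inf perturbation against class 1: move each coordinate
# by eps in the direction -sign(w). The score drops by exactly
# eps * ||w||_1, even though each pixel changes only slightly.
x_adv = x - eps * np.sign(w)
score_adv = float(w @ x_adv)
```

Here a per-coordinate change of only 0.3 flips the sign of the score, which is why small-norm perturbations that are meaningless to humans can change a model's prediction.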
no code implementations • ICCV 2017 • Steffen Wolf, Lukas Schott, Ullrich Köthe, Fred Hamprecht
Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm.
no code implementations • 19 Nov 2015 • Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah
The study covers several types of deep learning architectures, and we evaluate the performance of the above frameworks on a single machine in both (multi-threaded) CPU and GPU (Nvidia Titan X) settings.
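Single-machine benchmarks of this kind typically time repeated runs of a workload after a few warm-up iterations and report a robust statistic such as the median. A minimal sketch of such a harness, with a numpy matrix multiply as a stand-in workload (this is an illustrative pattern, not the paper's actual benchmarking code):

```python
import time
import numpy as np

def benchmark(fn, warmup=3, repeats=10):
    """Median wall-clock time of fn() in seconds, after warm-up runs
    (warm-up excludes one-time costs such as allocation or JIT)."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

A = np.random.default_rng(0).normal(size=(256, 256))
t_matmul = benchmark(lambda: A @ A)
```

The same harness can wrap a framework's forward or forward-backward pass to compare CPU and GPU settings.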