2 code implementations • 16 Nov 2022 • Marc Fischer, Alexander Bartler, Bin Yang
As such, fine-tuning a model to a downstream task in a parameter-efficient yet effective way, e.g., for a new set of classes in semantic segmentation, is of increasing importance.
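For illustration only, here is a minimal sketch of one common parameter-efficient strategy: freeze a pretrained segmentation backbone and train only a new classification head for the downstream classes. It assumes a torchvision DeepLabV3 model; the paper's actual method may differ, and the class count is hypothetical.

```python
# Sketch: parameter-efficient fine-tuning by freezing the backbone and
# training only a new 1x1 classifier head for the downstream classes.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_NEW_CLASSES = 5  # hypothetical downstream label set

model = deeplabv3_resnet50(weights="DEFAULT")  # pretrained segmentation model
for p in model.parameters():
    p.requires_grad = False  # freeze all pretrained weights

# Replace the final 1x1 classifier convolution with one sized for the new classes.
model.classifier[4] = nn.Conv2d(256, NUM_NEW_CLASSES, kernel_size=1)

# Only the new head's parameters are trainable, so the optimizer sees few weights.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

x = torch.randn(2, 3, 256, 256)                        # dummy RGB batch
y = torch.randint(0, NUM_NEW_CLASSES, (2, 256, 256))   # dummy segmentation masks
out = model(x)["out"]                                  # (2, NUM_NEW_CLASSES, 256, 256)
loss = nn.functional.cross_entropy(out, y)
loss.backward()
optimizer.step()
```

Freezing the backbone keeps the trainable parameter count to a small fraction of the full model, which is the usual trade-off such methods exploit.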
1 code implementation • 18 May 2022 • Alexander Bartler, Florian Bender, Felix Wiewel, Bin Yang
Nowadays, deep neural networks outperform humans in many tasks.
no code implementations • 6 Sep 2021 • Nico Reick, Felix Wiewel, Alexander Bartler, Bin Yang
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
no code implementations • 5 May 2021 • Robert A. Marsden, Alexander Bartler, Mario Döbler, Bin Yang
To avoid the costly annotation of training data for unseen domains, unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain (a minimal sketch of this setup follows this entry).
Ranked #20 on Synthetic-to-Real Translation on SYNTHIA-to-Cityscapes
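As a hedged sketch of the generic UDA setup described above: a supervised loss on labeled source data is combined with an unsupervised objective on unlabeled target data. Entropy minimization is used below purely as a simple stand-in for the unsupervised term; the paper's own method differs, and `uda_step` and its arguments are illustrative.

```python
# Sketch of a generic UDA training step: source cross-entropy plus an
# unsupervised objective (here, entropy minimization) on unlabeled target data.
import torch
import torch.nn.functional as F

def uda_step(model, optimizer, x_src, y_src, x_tgt, lam=0.1):
    """One step: supervised source loss + entropy minimization on the target."""
    logits_src = model(x_src)
    loss_sup = F.cross_entropy(logits_src, y_src)   # labeled source domain

    probs_tgt = F.softmax(model(x_tgt), dim=1)      # unlabeled target domain
    loss_ent = -(probs_tgt * probs_tgt.clamp_min(1e-8).log()).sum(dim=1).mean()

    loss = loss_sup + lam * loss_ent                # lam balances the two terms
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```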
1 code implementation • 30 Mar 2021 • Alexander Bartler, Andre Bühler, Felix Wiewel, Mario Döbler, Bin Yang
By minimizing the self-supervised loss, we learn task-specific model parameters for each new task.
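A hedged sketch of this idea, assuming a feature extractor plus a small auxiliary head: starting from shared weights, task-specific parameters are obtained by a few gradient steps on a self-supervised loss over the task's unlabeled data. Rotation prediction is used here only as a stand-in objective; the paper defines its own loss, and `adapt_to_task`, `rot_head`, and the step count are hypothetical.

```python
# Sketch: derive task-specific parameters by minimizing a self-supervised
# loss (rotation prediction as a stand-in) on the task's unlabeled data.
import copy
import torch
import torch.nn.functional as F

def adapt_to_task(model, rot_head, x_unlabeled, steps=5, lr=1e-3):
    """Return a task-specific copy of the model adapted via self-supervision."""
    task_model, task_head = copy.deepcopy(model), copy.deepcopy(rot_head)
    opt = torch.optim.SGD(
        list(task_model.parameters()) + list(task_head.parameters()), lr=lr
    )
    for _ in range(steps):
        k = torch.randint(0, 4, (x_unlabeled.size(0),))  # rotation labels 0..3
        x_rot = torch.stack(
            [torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(x_unlabeled, k)]
        )
        # rot_head maps features to 4 logits (one per rotation).
        loss = F.cross_entropy(task_head(task_model(x_rot)), k)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return task_model  # task-specific parameters
```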
no code implementations • 8 Mar 2021 • Yiwen Liao, Alexander Bartler, Bin Yang
Experiments on both benchmark and real-world datasets demonstrate the effectiveness of SWAD and its advantage over existing methods.
no code implementations • ICLR 2019 • Alexander Bartler, Felix Wiewel, Bin Yang, Lukas Mauch
In this paper, we propose a simple method to train VAEs with binary or categorical latent representations.
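For context, the standard baseline for this setting is the Gumbel-Softmax relaxation (Jang et al., 2017), which makes sampling from a categorical latent differentiable. The sketch below illustrates a categorical-latent VAE trained that way, assuming binarized inputs; it shows the problem setting, not the paper's proposed method.

```python
# Sketch: VAE with categorical latents trained via the Gumbel-Softmax
# relaxation, with a KL term against a uniform categorical prior.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoricalVAE(nn.Module):
    def __init__(self, x_dim=784, n_latents=20, n_cats=10):
        super().__init__()
        self.n_latents, self.n_cats = n_latents, n_cats
        self.enc = nn.Linear(x_dim, n_latents * n_cats)
        self.dec = nn.Linear(n_latents * n_cats, x_dim)

    def forward(self, x, tau=0.5):
        logits = self.enc(x).view(-1, self.n_latents, self.n_cats)
        # Differentiable sample from the relaxed categorical posterior;
        # hard=True gives one-hot samples with a straight-through gradient.
        z = F.gumbel_softmax(logits, tau=tau, hard=True)
        x_hat = self.dec(z.view(x.size(0), -1))
        # Negative ELBO: reconstruction (x assumed binarized to [0, 1])
        # plus KL(q || uniform) = sum q * (log q + log K).
        recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
        q = F.softmax(logits, dim=-1)
        kl = (q * (q.clamp_min(1e-8).log() + math.log(self.n_cats))).sum()
        return (recon + kl) / x.size(0)
```

The temperature `tau` controls how closely the relaxed samples approximate discrete one-hot vectors; it is typically annealed toward zero over training.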