1 code implementation • 24 Jan 2024 • Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer
Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.
1 code implementation • 16 Mar 2023 • Imant Daunhawer, Alice Bizeul, Emanuele Palumbo, Alexander Marx, Julia E. Vogt
Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables.
no code implementations • 17 Jun 2022 • Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal
As such, there is a lack of insight into the robustness to distribution shift of representations learned by unsupervised methods, such as self-supervised learning (SSL) and autoencoder-based algorithms (AEs).
no code implementations • NeurIPS Workshop ICBINB 2021 • Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, Julia E. Vogt
Multimodal variational autoencoders (VAEs) have shown promise as efficient generative models for weakly-supervised data.
1 code implementation • ICLR 2021 • Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt
Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research.
1 code implementation • NeurIPS 2020 • Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt
Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena.