1 code implementation • 18 Jun 2021 • Martine Toering, Ioannis Gatopoulos, Maarten Stol, Vincent Tao Hu
Instance-level contrastive learning techniques, which rely on data augmentation and a contrastive loss function, have found great success in the domain of visual representation learning.
Ranked #3 on Self-supervised Video Retrieval on HMDB51
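The entry above mentions the two ingredients of instance-level contrastive learning: data augmentation and a contrastive loss. As a rough illustration only (not the paper's actual method), here is a minimal NumPy sketch of a generic InfoNCE-style contrastive loss, where matching rows of two embedding batches are positives and all other rows serve as negatives; the function name and toy embeddings are invented for this example:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Generic InfoNCE-style contrastive loss between two batches of
    embeddings z1, z2 of shape (N, D). Row i of z1 and row i of z2
    form a positive pair; all other rows act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_pos = info_nce_loss(z, z)                       # identical "views"
loss_rand = info_nce_loss(z, rng.normal(size=(8, 16)))  # unrelated "views"
print(loss_pos < loss_rand)
```

As expected, aligned views give a lower loss than unrelated ones; a real pipeline would produce the two views with data augmentation and backpropagate this loss through an encoder.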
no code implementations • 3 Nov 2020 • Daniel Lutscher, Ali el Hassouni, Maarten Stol, Mark Hoogendoorn
Finding well-defined clusters in data represents a fundamental challenge for many data-driven applications, and largely depends on good data representation.
no code implementations • 5 Sep 2020 • Andrei Apostol, Maarten Stol, Patrick Forré
Modern neural networks, although achieving state-of-the-art results on many tasks, tend to have a large number of parameters, which increases training time and resource usage.
1 code implementation • 9 Jun 2020 • Ioannis Gatopoulos, Maarten Stol, Jakub M. Tomczak
The framework of variational autoencoders (VAEs) provides a principled method for jointly learning latent-variable models and corresponding inference models.
Ranked #62 on Image Generation on CIFAR-10 (bits/dimension metric)
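The VAE entry above refers to jointly learning a latent-variable model and an inference model; the standard training objective for that is the ELBO. As a generic sketch (not this paper's specific architecture), the following NumPy code evaluates an ELBO with a unit-variance Gaussian decoder and a standard-normal prior; all names here are illustrative:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def elbo(x, x_recon, mu, logvar):
    """ELBO = reconstruction log-likelihood minus KL to the prior,
    with a unit-variance Gaussian decoder (constants dropped)."""
    recon_ll = -0.5 * np.sum((x - x_recon) ** 2, axis=1)
    return np.mean(recon_ll - gaussian_kl(mu, logvar))

x = np.zeros((4, 10))
# Perfect reconstruction with a posterior equal to the prior gives
# an ELBO of 0 (up to the dropped normalization constant).
value = elbo(x, x, np.zeros((4, 3)), np.zeros((4, 3)))
print(value)
```

In practice `mu` and `logvar` come from the encoder, `x_recon` from the decoder via a reparameterized sample, and the whole quantity is maximized by gradient ascent.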
1 code implementation • 1 Jun 2020 • Stijn Verdenius, Maarten Stol, Patrick Forré
With the introduction of SNIP [arXiv:1810.02340v2], it has been demonstrated that modern neural networks can effectively be pruned before training.
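SNIP's prune-before-training idea scores each connection by the sensitivity of the loss to removing it, |w * dL/dw| from a single batch, and keeps only the top fraction. A minimal NumPy sketch of that saliency criterion (the helper name and toy tensors are illustrative, not code from either paper):

```python
import numpy as np

def snip_mask(weights, grads, keep_ratio):
    """SNIP-style connection saliency |w * dL/dw|; returns a binary
    mask keeping the top `keep_ratio` fraction of connections,
    computed before any training happens."""
    saliency = np.abs(weights * grads)
    k = int(np.ceil(keep_ratio * saliency.size))
    threshold = np.sort(saliency.ravel())[-k]     # k-th largest saliency
    return (saliency >= threshold).astype(np.float32)

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 5))   # untrained layer weights
g = rng.normal(size=(4, 5))   # stand-in for a single-batch gradient
mask = snip_mask(w, g, keep_ratio=0.2)
print(int(mask.sum()))        # number of surviving connections (20% of 20)
```

Training then proceeds with `w * mask`, so the removed connections never consume compute or memory updates.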