no code implementations • 8 Apr 2023 • Maxim Khomiakov, Michael Riis Andersen, Jes Frellsen
In geospatial planning, it is often essential to represent objects in a vectorized format, as this format easily translates to downstream tasks such as web development, graphics, or design.
no code implementations • 28 Mar 2023 • Kilian Zepf, Eike Petersen, Jes Frellsen, Aasa Feragen
Segmentation uncertainty models predict a distribution over plausible segmentations for a given input, which they learn from the annotator variation in the training set.
no code implementations • 23 Mar 2023 • Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Aasa Feragen, Frederik Warburg
To ensure robustness to such incorrect segmentations, we propose Laplacian Segmentation Networks (LSN) that jointly model epistemic (model) and aleatoric (data) uncertainty in image segmentation.
no code implementations • 20 Mar 2023 • Maxim Khomiakov, Alejandro Valverde Mahou, Alba Reinders Sánchez, Jes Frellsen, Michael Riis Andersen
We present a novel pipeline for learning the conditional distribution of a building roof mesh given pixels from an aerial image, under the assumption that roof geometry follows a set of regular patterns.
no code implementations • 27 Feb 2023 • Marloes Arts, Jes Frellsen, Wouter Boomsma
After the recent ground-breaking advances in protein structure prediction, one of the remaining challenges in protein machine learning is to reliably predict distributions of structural states.
no code implementations • 6 Dec 2022 • Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem.
no code implementations • 2 Dec 2022 • Maxim Khomiakov, Julius Holbech Radzikowski, Carl Anton Schmidt, Mathias Bonde Sørensen, Mads Andersen, Michael Riis Andersen, Jes Frellsen
The body of research on classifying solar panel arrays from aerial imagery is growing, yet public benchmark datasets remain scarce.
1 code implementation • 20 Oct 2022 • Dennis Ulmer, Jes Frellsen, Christian Hardmeier
We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages.
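A common model-agnostic baseline for predictive confidence is the classifier's maximum softmax probability, with predictive entropy as the complementary uncertainty score. A minimal numpy sketch of these two quantities (an illustration of the general notion, not the paper's method):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_and_entropy(logits):
    """Max-probability confidence and predictive entropy of a classifier."""
    p = softmax(logits)
    conf = p.max(axis=-1)
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return conf, ent

# A peaked predictive distribution is confident (low entropy); a flat one is not.
conf_peaked, ent_peaked = confidence_and_entropy(np.array([8.0, 0.0, 0.0]))
conf_flat, ent_flat = confidence_and_entropy(np.array([1.0, 1.0, 1.0]))
```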
1 code implementation • 14 Apr 2022 • Dennis Ulmer, Christian Hardmeier, Jes Frellsen
Much machine learning (ML) and deep learning (DL) research is empirical in nature.
no code implementations • 2 Mar 2022 • Federico Bergamin, Pierre-Alexandre Mattei, Jakob D. Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Søren Hauberg, Jes Frellsen
These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model.
1 code implementation • 22 Feb 2022 • Simon Bartels, Kristoffer Stensbo-Smidt, Pablo Moreno-Muñoz, Wouter Boomsma, Jes Frellsen, Søren Hauberg
We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data.
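The baseline version of this idea is subset-of-data GP regression: fit the exact GP posterior on a small subset and predict with it. The sketch below uses a uniformly random subset for illustration; the paper's contribution lies in how the subset is chosen, which this toy does not reproduce.

```python
import numpy as np

def rbf(X1, X2, ell=1.0, sf2=1.0):
    """Squared-exponential (RBF) kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_predict_subset(X, y, Xstar, m=50, noise=0.1, seed=0):
    """Exact GP posterior mean/variance computed on a random size-m subset."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    Xs, ys = X[idx], y[idx]
    K = rbf(Xs, Xs) + noise * np.eye(len(Xs))
    Ks = rbf(Xstar, Xs)
    alpha = np.linalg.solve(K, ys)
    mean = Ks @ alpha
    var = rbf(Xstar, Xstar).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
mean, var = gp_predict_subset(X, y, np.array([[0.0]]))  # predict at x = 0
```

Cost drops from O(n^3) on all n points to O(m^3) on the subset, which is the entire point of the approximation.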
1 code implementation • 22 Feb 2022 • Jakob D. Havtorn, Lasse Borgholt, Søren Hauberg, Jes Frellsen, Lars Maaløe
Stochastic latent variable models (LVMs) achieve state-of-the-art performance on natural image generation but are still inferior to deterministic models on speech.
no code implementations • 26 Jan 2022 • Pierre-Alexandre Mattei, Jes Frellsen
Inspired by this simple monotonicity theorem, we present a series of non-asymptotic results that link properties of Monte Carlo estimates to the tightness of Monte Carlo objectives (MCOs).
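The monotonicity in question is the classical property of importance-weighted bounds: the K-sample bound is nondecreasing in K and approaches the log-likelihood from below. A numerical illustration on an assumed toy Gaussian model (the model, proposal, and sample sizes here are illustrative choices, not from the paper):

```python
import numpy as np

def iwae_bound(x, K, n_rep=50000, seed=0):
    """Estimate the K-sample importance-weighted bound for a toy model
    z ~ N(0,1), x|z ~ N(z,1), with proposal q(z) = N(0,1) = p(z),
    so the prior and proposal terms in the importance weights cancel."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_rep, K))
    log_w = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)  # log p(x|z)
    m = log_w.max(axis=1, keepdims=True)                   # stable logsumexp
    log_mean_w = m[:, 0] + np.log(np.exp(log_w - m).mean(axis=1))
    return log_mean_w.mean()

bounds = [iwae_bound(1.0, K) for K in (1, 5, 50)]
# The marginal is x ~ N(0, 2), so the exact log-likelihood at x = 1 is:
true_ll = -0.5 * 1.0 ** 2 / 2 - 0.5 * np.log(2 * np.pi * 2)
```

The estimated bounds increase with K while staying below the true log-likelihood.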
no code implementations • NeurIPS 2021 • Cong Geng, Jia Wang, Zhiyong Gao, Jes Frellsen, Søren Hauberg
Energy-based models (EBMs) provide an elegant framework for density estimation, but they are notoriously difficult to train.
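The source of the training difficulty is the normalising constant: an EBM defines p(x) = exp(-E(x)) / Z, and Z is intractable in high dimensions. In one dimension, Z can be computed by quadrature, which makes the definition concrete (a toy illustration, not a training procedure):

```python
import numpy as np

def ebm_density(x, energy):
    """Density of a 1-D energy-based model p(x) = exp(-E(x)) / Z, with the
    normalising constant Z computed by simple quadrature. In high dimensions
    this integral is intractable, which is what makes EBMs hard to train."""
    grid = np.linspace(-10, 10, 20001)
    Z = np.sum(np.exp(-energy(grid))) * (grid[1] - grid[0])
    return np.exp(-energy(x)) / Z

gaussian_energy = lambda x: 0.5 * x ** 2  # energy of a standard Gaussian
p0 = ebm_density(np.array([0.0]), gaussian_energy)
```

With this quadratic energy, the recovered density at 0 matches the standard normal value 1/sqrt(2*pi).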
no code implementations • 6 Oct 2021 • Dennis Ulmer, Christian Hardmeier, Jes Frellsen
Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov Chain sampling, ensembling, or Monte Carlo dropout.
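Of the approaches listed, Monte Carlo dropout is the simplest to sketch: dropout is kept active at test time and several stochastic forward passes are averaged. The two-layer network below is a hypothetical toy, purely to show the mechanics:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.5, T=200, seed=0):
    """Monte Carlo dropout: keep dropout active at test time and average
    T stochastic forward passes to get a predictive mean and spread."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(T):
        h = np.maximum(0.0, x @ W1)         # ReLU hidden layer
        mask = rng.random(h.shape) > p      # random dropout mask
        h = h * mask / (1 - p)              # inverted-dropout scaling
        preds.append(h @ W2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(42)
W1 = rng.standard_normal((3, 16)) / np.sqrt(3)
W2 = rng.standard_normal((16, 1)) / np.sqrt(16)
mean, std = mc_dropout_predict(np.ones((1, 3)), W1, W2)
```

The spread of the T passes serves as the uncertainty estimate; the inverted-dropout scaling keeps the mean prediction consistent with the deterministic network.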
no code implementations • ICLR 2022 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen
To address supervised deep learning with missing values, we propose to marginalize over missing values in a joint model of covariates and outcomes.
no code implementations • 29 Sep 2021 • Jakob Drachmann Havtorn, Lasse Borgholt, Jes Frellsen, Søren Hauberg, Lars Maaløe
While stochastic latent variable models (LVMs) now achieve state-of-the-art performance on natural image generation, they are still inferior to deterministic models on speech.
4 code implementations • 16 Feb 2021 • Jakob D. Havtorn, Jes Frellsen, Søren Hauberg, Lars Maaløe
Deep generative models have been demonstrated as state-of-the-art density estimators.
Out-of-Distribution (OOD) Detection
1 code implementation • 12 Feb 2021 • Samuel Wiqvist, Jes Frellsen, Umberto Picchini
We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm.
1 code implementation • ICLR 2021 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen
When the process causing missingness depends on the missing values themselves, it must be explicitly modelled and taken into account in likelihood-based inference.
1 code implementation • 10 Feb 2019 • Anton Mallasto, Jes Frellsen, Wouter Boomsma, Aasa Feragen
We contribute to the WGAN literature by introducing the family of $(q, p)$-Wasserstein GANs, which allow the use of more general $p$-Wasserstein metrics for $p\geq 1$ in the GAN learning procedure.
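For empirical distributions on the real line, the p-Wasserstein distance has a closed form: the optimal coupling matches sorted samples. A small sketch of that base quantity (the univariate metric only, not the GAN training procedure):

```python
import numpy as np

def wasserstein_p_1d(x, y, p=2):
    """p-Wasserstein distance between two equal-size 1-D empirical
    distributions: the optimal transport plan pairs sorted samples."""
    x, y = np.sort(x), np.sort(y)
    return (np.abs(x - y) ** p).mean() ** (1.0 / p)

a = np.array([0.0, 1.0, 2.0])
b = a + 3.0  # translating a distribution by c moves it exactly c in W_p
```

Translation invariance up to the shift holds for every p >= 1, which is one reason Wasserstein metrics behave well as GAN objectives.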
1 code implementation • 29 Jan 2019 • Samuel Wiqvist, Pierre-Alexandre Mattei, Umberto Picchini, Jes Frellsen
We present a novel family of deep neural architectures, named partially exchangeable networks (PENs) that leverage probabilistic symmetries.
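The fully exchangeable special case of this idea is the deep-sets construction: embed each element, then pool with a permutation-invariant operation. The sketch below shows that base case only (with a hypothetical fixed embedding); PENs generalise it to partial, Markov-type exchangeability, which this toy does not capture.

```python
import numpy as np

def phi(x):
    """Per-element embedding (a fixed toy nonlinearity for illustration)."""
    return np.stack([x, x ** 2], axis=-1)

def deep_set(x):
    """Permutation-invariant set function: sum-pool per-element embeddings."""
    return phi(x).sum(axis=0)

x = np.array([1.0, 2.0, 3.0])
perm = np.array([3.0, 1.0, 2.0])  # same multiset, different order
```

Because the pooling is a sum, any reordering of the inputs yields the same output, which is the probabilistic symmetry being leveraged.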
no code implementations • 6 Dec 2018 • Pierre-Alexandre Mattei, Jes Frellsen
Our approach, called MIWAE, is based on the importance-weighted autoencoder (IWAE), and maximises a potentially tight lower bound of the log-likelihood of the observed data.

no code implementations • NeurIPS 2018 • Pierre-Alexandre Mattei, Jes Frellsen
Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model.
no code implementations • NeurIPS 2017 • Wouter Boomsma, Jes Frellsen
We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting.
1 code implementation • 13 Jul 2017 • Thomas Brouwer, Jes Frellsen, Pietro Lió
In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values, and for finding patterns in the data.
1 code implementation • 26 Oct 2016 • Thomas Brouwer, Jes Frellsen, Pietro Lió
We present a fast variational Bayesian algorithm for performing non-negative matrix factorisation and tri-factorisation.
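For orientation, the classical point-estimate version of the problem is solved by Lee–Seung multiplicative updates; the paper's variational Bayesian algorithm additionally infers posterior uncertainty over the factors, which this sketch does not attempt.

```python
import numpy as np

def nmf(V, k=2, iters=500, seed=0):
    """Classical multiplicative-update NMF minimising ||V - W H||_F^2
    (Lee & Seung). Factors stay non-negative because updates are
    element-wise multiplications of non-negative quantities."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # exact rank-1 matrix
W, H = nmf(V, k=1)
err = np.linalg.norm(V - W @ H)
```

On exact low-rank non-negative data the reconstruction error shrinks to near zero; tri-factorisation replaces W H with a three-factor product W S H.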
no code implementations • 16 Feb 2016 • Alexandre K. W. Navarro, Jes Frellsen, Richard E. Turner
First we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution.
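The univariate building block is the ordinary von Mises density exp(kappa*cos(theta - mu)) / (2*pi*I0(kappa)); the mGvM generalises it to correlated multivariate circular variables. A sketch of the univariate case only, with the Bessel-function normaliser replaced by quadrature to stay dependency-free:

```python
import numpy as np

def von_mises_pdf(theta, mu=0.0, kappa=2.0):
    """Von Mises density on the circle. The normalising constant
    2*pi*I0(kappa) is computed by simple quadrature here to avoid a
    Bessel-function dependency."""
    t = np.linspace(-np.pi, np.pi, 20001)
    Z = np.sum(np.exp(kappa * np.cos(t))) * (t[1] - t[0])
    return np.exp(kappa * np.cos(theta - mu)) / Z

theta = np.linspace(-np.pi, np.pi, 20001)
p = von_mises_pdf(theta, mu=0.5, kappa=2.0)
mass = np.sum(p) * (theta[1] - theta[0])  # should be close to 1
```

The density integrates to one over the circle and peaks at the mean direction mu; kappa plays the role of a (circular) concentration parameter.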