Search Results for author: Jes Frellsen

Found 28 papers, 11 papers with code

Polygonizer: An auto-regressive building delineator

no code implementations • 8 Apr 2023 • Maxim Khomiakov, Michael Riis Andersen, Jes Frellsen

In geospatial planning, it is often essential to represent objects in a vectorized format, as this format easily translates to downstream tasks such as web development, graphics, or design.

Semantic Segmentation

That Label's Got Style: Handling Label Style Bias for Uncertain Image Segmentation

no code implementations • 28 Mar 2023 • Kilian Zepf, Eike Petersen, Jes Frellsen, Aasa Feragen

Segmentation uncertainty models predict a distribution over plausible segmentations for a given input, which they learn from the annotator variation in the training set.

Image Segmentation · Segmentation · +1

Laplacian Segmentation Networks: Improved Epistemic Uncertainty from Spatial Aleatoric Uncertainty

no code implementations • 23 Mar 2023 • Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Aasa Feragen, Frederik Warburg

To ensure robustness to such incorrect segmentations, we propose Laplacian Segmentation Networks (LSN) that jointly model epistemic (model) and aleatoric (data) uncertainty in image segmentation.

Image Segmentation · Segmentation · +1

Learning to Generate 3D Representations of Building Roofs Using Single-View Aerial Imagery

no code implementations • 20 Mar 2023 • Maxim Khomiakov, Alejandro Valverde Mahou, Alba Reinders Sánchez, Jes Frellsen, Michael Riis Andersen

We present a novel pipeline for learning the conditional distribution of a building roof mesh given pixels from an aerial image, under the assumption that roof geometry follows a set of regular patterns.

Internal-Coordinate Density Modelling of Protein Structure: Covariance Matters

no code implementations • 27 Feb 2023 • Marloes Arts, Jes Frellsen, Wouter Boomsma

After the recent ground-breaking advances in protein structure prediction, one of the remaining challenges in protein machine learning is to reliably predict distributions of structural states.

Protein Structure Prediction

Explainability as statistical inference

no code implementations • 6 Dec 2022 • Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei

The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem.

Imputation

Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity

1 code implementation • 20 Oct 2022 • Dennis Ulmer, Jes Frellsen, Christian Hardmeier

We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages.
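As a much simpler point of reference than the methods studied in this paper, the entropy of a classifier's softmax output is a common baseline confidence score. The sketch below is purely illustrative (function names and logits are invented, not from the paper):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(probs):
    # Shannon entropy in nats; higher entropy = lower confidence
    return -np.sum(probs * np.log(probs + 1e-12))

confident = softmax(np.array([5.0, 0.0, 0.0]))  # peaked distribution
uncertain = softmax(np.array([1.0, 1.0, 1.0]))  # uniform distribution
```

The uniform case attains the maximum entropy log(3) for three classes, while the peaked case scores much lower.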

Model-agnostic out-of-distribution detection using combined statistical tests

no code implementations • 2 Mar 2022 • Federico Bergamin, Pierre-Alexandre Mattei, Jakob D. Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Søren Hauberg, Jes Frellsen

These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model.

Out-of-Distribution Detection
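The paper's tests are built from quantities such as the model's score; as a much simpler illustration of the underlying idea (flag an input whose test statistic is atypical under the training distribution, in either direction, since generative models can assign OOD data too high a likelihood), here is a two-sided interval test on the average log-likelihood under a fitted diagonal Gaussian. Everything here is an illustrative assumption, not the paper's statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" data from the in-distribution model
train = rng.normal(0.0, 1.0, size=(5000, 10))
mu, sigma = train.mean(axis=0), train.std(axis=0)

def avg_log_lik(x, mu, sigma):
    # Average per-dimension Gaussian log-density of one sample
    z = (x - mu) / sigma
    return np.mean(-0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(sigma))

# Null distribution of the statistic from held-out in-distribution data
held_out = rng.normal(0.0, 1.0, size=(2000, 10))
null_stats = np.array([avg_log_lik(x, mu, sigma) for x in held_out])
lo, hi = np.quantile(null_stats, [0.005, 0.995])

def is_ood(x):
    s = avg_log_lik(x, mu, sigma)
    return s < lo or s > hi  # two-sided: "too likely" is also suspicious

far_ood = rng.normal(5.0, 1.0, size=10)  # clearly out of distribution
```

The two-sided acceptance interval is what makes this a statistical test rather than a raw likelihood threshold.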

Adaptive Cholesky Gaussian Processes

1 code implementation • 22 Feb 2022 • Simon Bartels, Kristoffer Stensbo-Smidt, Pablo Moreno-Muñoz, Wouter Boomsma, Jes Frellsen, Søren Hauberg

We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data.

Gaussian Processes
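A minimal sketch of the subset-of-data idea that this paper refines: exact GP regression via a Cholesky factorisation, fitted on a fixed random subset rather than the full data. The fixed subset size, RBF kernel, and noise level are illustrative assumptions; the paper's contribution is an adaptive criterion, which is not shown here:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between 1-D input arrays
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    # Exact GP posterior mean via Cholesky factorisation of the kernel matrix
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf(x_test, x_train) @ alpha

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 6, 2000))
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

# Subset-of-data approximation: train on 10% of the points
idx = rng.choice(x.size, size=200, replace=False)
x_test = np.linspace(0, 6, 50)
full = gp_predict(x, y, x_test)       # O(n^3) in the full data
approx = gp_predict(x[idx], y[idx], x_test)  # much cheaper
```

On this smooth toy function the subset prediction stays close to the full-data prediction at a fraction of the cubic cost.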

Benchmarking Generative Latent Variable Models for Speech

1 code implementation • 22 Feb 2022 • Jakob D. Havtorn, Lasse Borgholt, Søren Hauberg, Jes Frellsen, Lars Maaløe

Stochastic latent variable models (LVMs) achieve state-of-the-art performance on natural image generation but are still inferior to deterministic models on speech.

Benchmarking · Image Generation · +1

Uphill Roads to Variational Tightness: Monotonicity and Monte Carlo Objectives

no code implementations • 26 Jan 2022 • Pierre-Alexandre Mattei, Jes Frellsen

Inspired by this simple monotonicity theorem, we present a series of nonasymptotic results that link properties of Monte Carlo estimates to tightness of MCOs.

Variational Inference
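A canonical Monte Carlo objective is the IWAE bound, which tightens monotonically as the number of samples K grows. A numerical check of that behaviour, assuming a toy conjugate model z ~ N(0,1), x | z ~ N(z,1), so that the marginal log p(x) = log N(x; 0, 2) is available in closed form (this toy model is our assumption, not one from the paper):

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def log_lik(x, z):
    # log p(x | z) for the toy model x | z ~ N(z, 1)
    return -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)

def mco(x, K, n_rep=20000):
    # Monte Carlo objective: E[log (1/K) sum_k p(x | z_k)], z_k ~ prior N(0,1)
    z = rng.normal(size=(n_rep, K))
    return np.mean(logsumexp(log_lik(x, z), axis=1) - np.log(K))

x = 1.5
bounds = [mco(x, K) for K in (1, 5, 25, 125)]
exact = -0.5 * x**2 / 2 - 0.5 * np.log(2 * np.pi * 2)  # log N(x; 0, 2)
```

The estimated bounds increase with K while remaining below the exact marginal log-likelihood, exactly the monotone tightening the theorem describes.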

Bounds all around: training energy-based models with bidirectional bounds

no code implementations • NeurIPS 2021 • Cong Geng, Jia Wang, Zhiyong Gao, Jes Frellsen, Søren Hauberg

Energy-based models (EBMs) provide an elegant framework for density estimation, but they are notoriously difficult to train.

Density Estimation

Prior and Posterior Networks: A Survey on Evidential Deep Learning Methods For Uncertainty Estimation

no code implementations • 6 Oct 2021 • Dennis Ulmer, Christian Hardmeier, Jes Frellsen

Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov Chain sampling, ensembling, or Monte Carlo dropout.

How to deal with missing data in supervised deep learning?

no code implementations • ICLR 2022 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen

To address supervised deep learning with missing values, we propose to marginalize over missing values in a joint model of covariates and outcomes.

Inductive Bias · Variational Inference
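A minimal sketch of marginalising over a missing covariate in a joint model, assuming a toy bivariate Gaussian for the covariates rather than the paper's deep generative model (all parameters below are illustrative): the prediction given only x1 averages the predictive mean over draws of the missing x2 from its conditional.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint model: x1 ~ N(0,1), x2 | x1 ~ N(rho*x1, 1-rho^2), y = x1 + x2 + noise
rho = 0.8

def predict_marginal(x1, n_samples=100000):
    # Marginalise the missing covariate: E[y | x1] = E_{x2 | x1}[x1 + x2]
    x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2), size=n_samples)
    return np.mean(x1 + x2)

x1 = 1.0
mc = predict_marginal(x1)
exact = x1 + rho * x1  # closed form for this toy Gaussian model
```

With a deep joint model the conditional of the missing values is intractable, which is where the paper's variational machinery comes in; the toy model just makes the marginalisation explicit.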

Towards Generative Latent Variable Models for Speech

no code implementations • 29 Sep 2021 • Jakob Drachmann Havtorn, Lasse Borgholt, Jes Frellsen, Søren Hauberg, Lars Maaløe

While stochastic latent variable models (LVMs) now achieve state-of-the-art performance on natural image generation, they are still inferior to deterministic models on speech.

Image Generation · Video Generation

Sequential Neural Posterior and Likelihood Approximation

1 code implementation • 12 Feb 2021 • Samuel Wiqvist, Jes Frellsen, Umberto Picchini

We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm.

not-MIWAE: Deep Generative Modelling with Missing not at Random Data

1 code implementation • ICLR 2021 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen

When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account while doing likelihood-based inference.

Variational Inference

(q,p)-Wasserstein GANs: Comparing Ground Metrics for Wasserstein GANs

1 code implementation • 10 Feb 2019 • Anton Mallasto, Jes Frellsen, Wouter Boomsma, Aasa Feragen

We contribute to the WGAN literature by introducing the family of $(q, p)$-Wasserstein GANs, which allow the use of more general $p$-Wasserstein metrics for $p\geq 1$ in the GAN learning procedure.
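As background for the p-Wasserstein metrics with p ≥ 1 mentioned above: in one dimension, the p-Wasserstein distance between two empirical distributions with equally many samples reduces to pairing sorted samples (the monotone coupling). A small numerical check, unrelated to GAN training itself:

```python
import numpy as np

def wasserstein_p_1d(x, y, p=2):
    # 1-D p-Wasserstein distance between empirical distributions with
    # equally many samples: the optimal coupling matches sorted samples
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)
b = rng.normal(3.0, 1.0, 10000)
```

For two Gaussians with equal variance, W_p equals the distance between the means (here 3) for every p, which the empirical estimate recovers.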

Partially Exchangeable Networks and Architectures for Learning Summary Statistics in Approximate Bayesian Computation

1 code implementation • 29 Jan 2019 • Samuel Wiqvist, Pierre-Alexandre Mattei, Umberto Picchini, Jes Frellsen

We present a novel family of deep neural architectures, named partially exchangeable networks (PENs) that leverage probabilistic symmetries.

Time Series · Time Series Analysis

MIWAE: Deep Generative Modelling and Imputation of Incomplete Data

no code implementations • 6 Dec 2018 • Pierre-Alexandre Mattei, Jes Frellsen

Our approach, called MIWAE, is based on the importance-weighted autoencoder (IWAE), and maximises a potentially tight lower bound of the log-likelihood of the observed data.

Imputation
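The importance-weighted lower bound on the observed-data likelihood can be illustrated on a toy latent-variable model where one coordinate is missing and log p(x_obs) has a closed form. Unlike MIWAE, the proposal here is simply the prior rather than a learned encoder, and the model is an assumption for illustration:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
sigma = 0.5

# Toy model: z ~ N(0,1), each coordinate x_i | z ~ N(z, sigma^2);
# only x1 is observed, so the target is log p(x1), ignoring the missing x2.
def log_p_obs_estimate(x1, K=200000):
    z = rng.normal(size=K)  # proposal = prior, for simplicity
    log_w = -0.5 * ((x1 - z) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return logsumexp(log_w) - np.log(K)  # log of the average importance weight

x1 = 0.7
est = log_p_obs_estimate(x1)
var = 1.0 + sigma**2  # marginal variance of x1 in closed form
exact = -0.5 * x1**2 / var - 0.5 * np.log(2 * np.pi * var)
```

The key point mirrored from the paper: the bound only ever evaluates the likelihood of the observed coordinates, so missing entries never need to be filled in during training.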

Leveraging the Exact Likelihood of Deep Latent Variable Models

no code implementations • NeurIPS 2018 • Pierre-Alexandre Mattei, Jes Frellsen

Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model.

Imputation

Spherical convolutions and their application in molecular modelling

no code implementations • NeurIPS 2017 • Wouter Boomsma, Jes Frellsen

We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting.

Feature Engineering

Comparative Study of Inference Methods for Bayesian Nonnegative Matrix Factorisation

1 code implementation • 13 Jul 2017 • Thomas Brouwer, Jes Frellsen, Pietro Lió

In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values, and for finding patterns in the data.

Bayesian Inference · Model Selection
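For context, the classical non-Bayesian baseline that Bayesian matrix factorisation methods build on is multiplicative-updates NMF (Lee and Seung) minimising the Frobenius reconstruction error. A minimal sketch, with illustrative rank, iteration count, and data (not the paper's inference methods):

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, rank, n_iter=500, eps=1e-9):
    # Multiplicative updates minimising ||V - W H||_F with W, H >= 0
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Noiseless rank-3 non-negative matrix: recovery should be near-exact
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form of the updates is what preserves non-negativity without any projection step; the Bayesian variants studied in the paper replace the point estimates of W and H with posterior distributions.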

Fast Bayesian Non-Negative Matrix Factorisation and Tri-Factorisation

1 code implementation • 26 Oct 2016 • Thomas Brouwer, Jes Frellsen, Pietro Lió

We present a fast variational Bayesian algorithm for performing non-negative matrix factorisation and tri-factorisation.

The Multivariate Generalised von Mises distribution: Inference and applications

no code implementations • 16 Feb 2016 • Alexandre K. W. Navarro, Jes Frellsen, Richard E. Turner

First we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution.

Gaussian Processes
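The univariate von Mises distribution that the mGvM generalises, with density proportional to exp(kappa·cos(theta − mu)) on the circle, is available in SciPy. A quick sketch of its density (the parameter values are arbitrary choices for illustration):

```python
import numpy as np
from scipy.stats import vonmises

# Univariate von Mises: density ∝ exp(kappa * cos(theta - mu)) on the circle
kappa, mu = 2.0, 0.5
theta = np.linspace(-np.pi, np.pi, 1000)
pdf = vonmises.pdf(theta, kappa, loc=mu)

# The density integrates to one over the circle and peaks at the mean direction
mass = pdf.sum() * (theta[1] - theta[0])  # simple Riemann sum
peak = theta[np.argmax(pdf)]
```

The concentration parameter kappa plays the role of an inverse variance: as kappa grows the distribution approaches a Gaussian wrapped tightly around mu, which is why circular analogues of Gaussian machinery (such as the mGvM) are natural.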
