Search Results for author: Frederik Warburg

Found 14 papers, 7 papers with code

Learning to Taste: A Multimodal Wine Dataset

1 code implementation • NeurIPS 2023 • Thoranna Bender, Simon Moe Sørensen, Alireza Kashani, K. Eldjarn Hjorleifsson, Grethe Hyldig, Søren Hauberg, Serge Belongie, Frederik Warburg

We demonstrate that this shared concept embedding space improves upon separate embedding spaces for coarse flavor classification (alcohol percentage, country, grape, price, rating) and aligns with the intricate human perception of flavor.

DAC: Detector-Agnostic Spatial Covariances for Deep Local Features

1 code implementation • 20 May 2023 • Javier Tirado-Garín, Frederik Warburg, Javier Civera

Current deep visual local feature detectors do not model the spatial uncertainty of detected features, producing suboptimal results in downstream applications.
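
As context for the covariance idea above, here is a minimal sketch of one generic way to attach a 2x2 spatial covariance to a detected keypoint from the local curvature of the detection heatmap; this only illustrates the quantity being modelled and is not necessarily the estimator proposed in DAC.

```python
# Hypothetical sketch: a Laplace-style 2x2 covariance for a keypoint, taken as
# the inverse curvature of the detection heatmap at the peak (an illustration,
# not DAC's estimator).
import torch

def keypoint_covariance(heatmap: torch.Tensor, y: int, x: int, eps: float = 1e-6):
    """heatmap: (H, W) detection scores; (y, x): integer keypoint location."""
    s = heatmap
    # central finite differences for the 2x2 Hessian of the score around the peak
    dyy = s[y + 1, x] - 2 * s[y, x] + s[y - 1, x]
    dxx = s[y, x + 1] - 2 * s[y, x] + s[y, x - 1]
    dxy = (s[y + 1, x + 1] - s[y + 1, x - 1] - s[y - 1, x + 1] + s[y - 1, x - 1]) / 4
    hessian = -torch.stack([torch.stack([dyy, dxy]), torch.stack([dxy, dxx])])
    # covariance ~ inverse curvature at the peak (small regulariser for stability)
    return torch.linalg.inv(hessian + eps * torch.eye(2))

heat = torch.zeros(32, 32)
heat[10:13, 20:23] = torch.tensor([[0.2, 0.5, 0.2], [0.5, 1.0, 0.5], [0.2, 0.5, 0.2]])
print(keypoint_covariance(heat, 11, 21))
```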

Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs

1 code implementation • ICCV 2023 • Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski, Angjoo Kanazawa

Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such as floaters or flawed geometry when rendered outside the camera trajectory.

Novel View Synthesis

Laplacian Segmentation Networks: Improved Epistemic Uncertainty from Spatial Aleatoric Uncertainty

no code implementations • 23 Mar 2023 • Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Aasa Feragen, Frederik Warburg

To ensure robustness to such incorrect segmentations, we propose Laplacian Segmentation Networks (LSN) that jointly model epistemic (model) and aleatoric (data) uncertainty in image segmentation.

Image Segmentation · Segmentation +1
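
For intuition on the two kinds of uncertainty named above, here is a minimal sketch that pairs a predicted per-pixel logit variance (aleatoric) with MC-dropout as a stand-in for epistemic uncertainty; the paper itself uses a Laplace approximation rather than dropout, so treat this purely as an illustration of the two notions.

```python
# Hypothetical sketch: aleatoric uncertainty via a predicted logit variance,
# epistemic uncertainty via MC-dropout (not the paper's Laplace construction).
import torch, torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, classes: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
        )
        self.mean = nn.Conv2d(16, classes, 1)      # logit mean
        self.logvar = nn.Conv2d(16, classes, 1)    # logit log-variance (aleatoric)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.logvar(h)

net = TinySegNet().train()                 # keep dropout active for MC sampling
x = torch.rand(1, 3, 64, 64)

probs = []
for _ in range(8):                         # epistemic: repeated dropout samples
    mu, logvar = net(x)
    logits = mu + logvar.mul(0.5).exp() * torch.randn_like(mu)   # aleatoric sample
    probs.append(logits.softmax(dim=1))
probs = torch.stack(probs)
mean_prob = probs.mean(0)
epistemic = probs.var(0).mean(1)           # per-pixel spread across samples
print(mean_prob.shape, epistemic.shape)
```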

Searching for Structure in Unfalsifiable Claims

1 code implementation • 19 Aug 2022 • Peter Ebert Christensen, Frederik Warburg, Menglin Jia, Serge Belongie

In this work, we aim to distill such posts into a small set of narratives that capture the essential claims related to a given topic.

Fact Checking · Topic Models
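
As a rough illustration of distilling posts into a few narratives, below is a generic embed-and-cluster baseline; the encoder name and the clustering choice are assumptions made for this sketch and are not the paper's pipeline.

```python
# Hypothetical sketch: embed posts with a sentence encoder, cluster them, and
# report one representative post per cluster as a candidate "narrative".
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

posts = [
    "Claim A phrased one way.",
    "Claim A phrased another way.",
    "An unrelated claim B.",
    "Yet another take on claim B.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")        # illustrative model choice
emb = model.encode(posts, normalize_embeddings=True)

k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb)

# pick the post closest to each centroid as that narrative's representative
for c in range(k):
    idx = np.where(km.labels_ == c)[0]
    rep = idx[np.argmin(np.linalg.norm(emb[idx] - km.cluster_centers_[c], axis=1))]
    print(f"narrative {c}: {posts[rep]}")
```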

Laplacian Autoencoders for Learning Stochastic Representations

1 code implementation • 30 Jun 2022 • Marco Miani, Frederik Warburg, Pablo Moreno-Muñoz, Nicki Skafte Detlefsen, Søren Hauberg

In this work, we present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower-bound of the autoencoder evidence.

Bayesian Inference · Out-of-Distribution Detection +1
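
As a rough companion to the Bayesian-autoencoder idea above, here is a sketch of a post-hoc, diagonal Laplace-style posterior over the decoder weights built from squared gradients; the paper derives a variational lower bound and an online training scheme, so this is only an assumed, simplified stand-in.

```python
# Hypothetical sketch: train a plain autoencoder, fit a diagonal squared-gradient
# curvature over the decoder, then sample decoder weights to get reconstruction
# uncertainty (a simplified stand-in, not the paper's bound or online scheme).
import torch, torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 64), nn.Tanh(), nn.Linear(64, 2))
dec = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 784))
x = torch.rand(256, 784)                    # toy data standing in for images

# (1) ordinary MAP training
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    ((dec(enc(x)) - x) ** 2).mean().backward()
    opt.step()

# (2) diagonal squared-gradient curvature for the decoder only
fisher = [torch.zeros_like(p) for p in dec.parameters()]
for xi in x.split(32):
    dec.zero_grad()
    ((dec(enc(xi)) - xi) ** 2).mean().backward()
    for f, p in zip(fisher, dec.parameters()):
        f += p.grad.detach() ** 2
prior_precision = 1.0
post_var = [1.0 / (prior_precision + f) for f in fisher]

# (3) sample decoder weights from the approximate posterior
means = [p.detach().clone() for p in dec.parameters()]
recons = []
with torch.no_grad():
    for _ in range(10):
        for p, m, v in zip(dec.parameters(), means, post_var):
            p.copy_(m + v.sqrt() * torch.randn_like(m))
        recons.append(dec(enc(x[:8])))
pixel_std = torch.stack(recons).std(0)      # per-pixel reconstruction uncertainty
print(pixel_std.mean())
```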

SparseFormer: Attention-based Depth Completion Network

no code implementations • 9 Jun 2022 • Frederik Warburg, Michael Ramamonjisoa, Manuel López-Antequera

Depth completion remains a challenging problem due to the low-density, non-uniform, and outlier-prone 3D landmarks produced by SfM and SLAM pipelines.

Depth Completion
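
To make the input modality above concrete, here is a sketch of turning SfM/SLAM landmarks into the sparse depth map a completion network would consume; this is generic preprocessing under assumed intrinsics, not the SparseFormer architecture itself.

```python
# Hypothetical sketch: project 3D landmarks into a sparse depth image, where
# zeros mark pixels with no measurement (generic preprocessing only).
import torch

def sparse_depth_from_landmarks(points_cam, K, height, width):
    """points_cam: (N, 3) points in the camera frame; K: (3, 3) intrinsics."""
    z = points_cam[:, 2]
    valid = z > 0                                   # keep points in front of the camera
    uv = (K @ points_cam[valid].T).T
    u = (uv[:, 0] / uv[:, 2]).round().long()
    v = (uv[:, 1] / uv[:, 2]).round().long()
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = torch.zeros(height, width)
    depth[v[inside], u[inside]] = z[valid][inside]  # 0 means "no measurement"
    return depth

K = torch.tensor([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
pts = torch.randn(500, 3) * torch.tensor([1.0, 1.0, 0.2]) + torch.tensor([0.0, 0.0, 3.0])
print((sparse_depth_from_landmarks(pts, K, 240, 320) > 0).float().mean())  # density
```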

Volumetric Disentanglement for 3D Scene Manipulation

no code implementations • 6 Jun 2022 • Sagie Benaim, Frederik Warburg, Peter Ebert Christensen, Serge Belongie

To this end, we propose a volumetric framework for (i) disentangling, or separating, the volumetric representation of a given foreground object from the background, and (ii) semantically manipulating the foreground object, as well as the background.

Disentanglement · Object
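
To give a feel for what a two-component volumetric representation buys, here is a toy sketch of compositing separate foreground and background density and colour fields along a ray; dropping the foreground density removes the object from the rendering. This is only an illustration of the representation, not the paper's framework.

```python
# Hypothetical sketch: volume rendering with separate foreground/background
# density and colour, so the foreground can be edited or removed independently.
import torch

def composite(sigma_fg, rgb_fg, sigma_bg, rgb_bg, deltas):
    """All inputs are (n,) or (n, 3) samples along one ray; deltas are step sizes."""
    sigma = sigma_fg + sigma_bg                          # total density per sample
    alpha = 1 - torch.exp(-sigma * deltas)               # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10])[:-1], 0)
    weights = trans * alpha
    mix = (sigma_fg / (sigma + 1e-10)).unsqueeze(-1)     # share absorbed by foreground
    return (weights.unsqueeze(-1) * (mix * rgb_fg + (1 - mix) * rgb_bg)).sum(0)

n = 64
deltas = torch.full((n,), 0.05)
sigma_fg = torch.zeros(n); sigma_fg[20:30] = 5.0         # a foreground blob mid-ray
sigma_bg = torch.full((n,), 0.2)
rgb_fg = torch.tensor([1.0, 0.0, 0.0]).expand(n, 3)      # red object
rgb_bg = torch.tensor([0.0, 0.0, 1.0]).expand(n, 3)      # blue background
print(composite(sigma_fg, rgb_fg, sigma_bg, rgb_bg, deltas))          # with object
print(composite(torch.zeros(n), rgb_fg, sigma_bg, rgb_bg, deltas))    # object removed
```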

Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval

no code implementations • ICCV 2021 • Frederik Warburg, Martin Jørgensen, Javier Civera, Søren Hauberg

Uncertainty quantification in image retrieval is crucial for downstream decisions, yet it remains a challenging and largely unexplored problem.

Computational Efficiency · Image Retrieval +2
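
For intuition on a probabilistic triplet objective over uncertain embeddings, below is a Monte Carlo sketch of the probability that the positive lies closer to the anchor than the negative; the paper derives a closed-form likelihood, so this only illustrates the quantity involved.

```python
# Hypothetical sketch: estimate P(d(a, p) + margin < d(a, n)) under Gaussian
# embeddings by sampling (a Monte Carlo stand-in, not the paper's closed form).
import torch

def prob_triplet_satisfied(mu_a, var_a, mu_p, var_p, mu_n, var_n,
                           margin=0.1, samples=512):
    """Each mu_*/var_* is a (D,) mean / diagonal variance of an embedding."""
    def draw(mu, var):
        return mu + var.sqrt() * torch.randn(samples, mu.numel())
    a, p, n = draw(mu_a, var_a), draw(mu_p, var_p), draw(mu_n, var_n)
    d_ap = ((a - p) ** 2).sum(-1)
    d_an = ((a - n) ** 2).sum(-1)
    return (d_ap + margin < d_an).float().mean()   # P(positive closer than negative)

D = 8
mu_a, mu_p, mu_n = torch.zeros(D), torch.zeros(D) + 0.1, torch.ones(D)
var = torch.full((D,), 0.05)
print(prob_triplet_satisfied(mu_a, var, mu_p, var, mu_n, var))
```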

Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition

no code implementations • CVPR 2020 • Frederik Warburg, Søren Hauberg, Manuel López-Antequera, Pau Gargallo, Yubin Kuang, Javier Civera

Lifelong place recognition is an essential and challenging task in computer vision with vast applications in robust localization and efficient large-scale 3D reconstruction.

3D Reconstruction

Probabilistic Spatial Transformer Networks

1 code implementation • 7 Apr 2020 • Pola Schwöbel, Frederik Warburg, Martin Jørgensen, Kristoffer H. Madsen, Søren Hauberg

Spatial Transformer Networks (STNs) estimate image transformations that can improve downstream tasks by "zooming in" on relevant regions in an image.

Data Augmentation · Time Series +2
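
As a small illustration of a probabilistic spatial transformer, the sketch below samples affine parameters from a predicted Gaussian and averages the resulting warps; it is a minimal toy under assumed shapes and initialisation, not the paper's marginalisation scheme.

```python
# Hypothetical sketch: sample affine transforms from a predicted Gaussian and
# average the warped inputs (a toy probabilistic STN, not the paper's method).
import torch, torch.nn as nn, torch.nn.functional as F

class ProbSTN(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32), nn.ReLU())
        self.mu = nn.Linear(32, 6)       # mean of the 2x3 affine parameters
        self.logvar = nn.Linear(32, 6)   # log-variance of the affine parameters
        # initialise the mean to the identity transform
        self.mu.weight.data.zero_()
        self.mu.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x, samples: int = 4):
        h = self.loc(x)
        mu, std = self.mu(h), self.logvar(h).mul(0.5).exp()
        outs = []
        for _ in range(samples):
            theta = (mu + std * torch.randn_like(std)).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            outs.append(F.grid_sample(x, grid, align_corners=False))
        return torch.stack(outs).mean(0)     # average over transformation samples

x = torch.rand(2, 1, 28, 28)
print(ProbSTN()(x).shape)   # -> torch.Size([2, 1, 28, 28])
```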
