Search Results for author: Martin Trapp

Found 22 papers, 17 papers with code

Flatness Improves Backbone Generalisation in Few-shot Classification

no code implementations • 11 Apr 2024 • Rui Li, Martin Trapp, Marcus Klasson, Arno Solin

Deployment of deep neural networks in real-world settings typically requires adaptation to new tasks with few examples.

Classification

Characteristic Circuits

1 code implementation • NeurIPS 2023 • Zhongjie Yu, Martin Trapp, Kristian Kersting

In many real-world scenarios, it is crucial to be able to reliably and efficiently reason under uncertainty while capturing complex relationships in data.

Subtractive Mixture Models via Squaring: Representation and Learning

2 code implementations • 1 Oct 2023 • Lorenzo Loconte, Aleksanteri M. Sladek, Stefan Mengel, Martin Trapp, Arno Solin, Nicolas Gillis, Antonio Vergari

Mixture models are traditionally represented and learned by adding several distributions as components.
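As a worked sketch of the squaring construction the title refers to (the notation here is illustrative, not lifted from the paper): allowing real-valued, possibly negative weights and squaring the combination keeps the model non-negative,

$$c(x) = \Big(\sum_{i=1}^{K} w_i\, f_i(x)\Big)^{2}, \qquad w_i \in \mathbb{R},$$

so $c(x) \ge 0$ even when some $w_i < 0$; dividing by the normaliser $Z = \sum_{i,j} w_i w_j \int f_i(x) f_j(x)\,\mathrm{d}x$ yields a valid density that admits subtractive components a purely additive mixture of the same parts cannot express.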

Fixing Overconfidence in Dynamic Neural Networks

1 code implementation • 13 Feb 2023 • Lassi Meronen, Martin Trapp, Andrea Pilzer, Le Yang, Arno Solin

Dynamic neural networks are a recent technique that promises to remedy the increasing size of modern deep learning models by adapting their computational cost to the difficulty of the inputs.

Decision Making • Uncertainty Quantification
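For intuition, here is a minimal early-exit sketch of the kind of dynamic network in question (hypothetical names; this is neither the paper's architecture nor its calibration fix). Each block has its own classifier head, and inference stops at the first exit whose softmax confidence clears a threshold, so a miscalibrated, overconfident early head skips computation it should have spent:

    import torch.nn as nn

    class EarlyExitNet(nn.Module):
        """Toy early-exit network: stops at the first confident head."""
        def __init__(self, dim=64, n_classes=10, n_blocks=3, threshold=0.9):
            super().__init__()
            self.blocks = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                for _ in range(n_blocks))
            self.heads = nn.ModuleList(
                nn.Linear(dim, n_classes) for _ in range(n_blocks))
            self.threshold = threshold

        def forward(self, x):  # x: a single input of shape (dim,)
            for block, head in zip(self.blocks, self.heads):
                x = block(x)
                probs = head(x).softmax(dim=-1)
                # If the softmax is overconfident, hard inputs exit here
                # too early -- the failure mode the paper sets out to fix.
                if probs.max() >= self.threshold:
                    return probs
            return probs  # final exit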

Transport with Support: Data-Conditional Diffusion Bridges

1 code implementation • 31 Jan 2023 • Ella Tamir, Martin Trapp, Arno Solin

We integrate Bayesian filtering and optimal control into learning the diffusion process, enabling the generation of constrained stochastic processes governed by sparse observations at intermediate stages and terminal constraints.

Time Series
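The terminal-constraint part of this setup is classically handled by Doob's h-transform, stated here as background (the paper's contribution is learning such bridges with filtering and optimal control, which this formula alone does not capture):

$$\mathrm{d}x_t = \big[f(x_t, t) + \sigma^2(t)\,\nabla_{x}\log h(x_t, t)\big]\,\mathrm{d}t + \sigma(t)\,\mathrm{d}\beta_t, \qquad h(x, t) = \mathbb{P}(x_T \in B \mid x_t = x),$$

where the added drift term steers sample paths of the diffusion toward the constraint set $B$ at the terminal time $T$.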

Uncertainty-guided Source-free Domain Adaptation

1 code implementation • 16 Aug 2022 • Subhankar Roy, Martin Trapp, Andrea Pilzer, Juho Kannala, Nicu Sebe, Elisa Ricci, Arno Solin

Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.

Source-Free Domain Adaptation
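As a point of reference for the setting (explicitly not the paper's uncertainty-guided method), a bare-bones SFDA baseline adapts the pre-trained source model with a self-supervised objective such as entropy minimisation on unlabelled target batches:

    import torch

    def adapt_source_free(model, target_loader, steps=100, lr=1e-4):
        """Illustrative SFDA baseline: no source data, no target labels."""
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for step, x in enumerate(target_loader):
            if step >= steps:
                break
            probs = model(x).softmax(dim=-1)
            # Encourage confident predictions on the target domain.
            entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
            opt.zero_grad()
            entropy.backward()
            opt.step()
        return model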

Disentangling Model Multiplicity in Deep Learning

no code implementations • 17 Jun 2022 • Ari Heljakka, Martin Trapp, Juho Kannala, Arno Solin

Deep models that achieve near-identical test accuracy can still disagree on individual predictions; this observed 'predictive' multiplicity (PM) also implies elusive differences in the internals of the models, their 'representational' multiplicity (RM).
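A minimal way to quantify PM (an illustrative metric, not necessarily the one used in the paper) is the rate at which two equally accurate models disagree on individual test points:

    import numpy as np

    def disagreement_rate(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
        """Fraction of test points where two models' predicted labels differ.
        Two networks can match in aggregate accuracy yet score high here."""
        return float(np.mean(preds_a != preds_b))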

Periodic Activation Functions Induce Stationarity

2 code implementations • NeurIPS 2021 • Lassi Meronen, Martin Trapp, Arno Solin

Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret.

Translation

Leveraging Probabilistic Circuits for Nonparametric Multi-Output Regression

1 code implementation • 16 Jun 2021 • Zhongjie Yu, Mingye Zhu, Martin Trapp, Arseny Skryagin, Kristian Kersting

Inspired by recent advances in the field of expert-based approximations of Gaussian processes (GPs), we present an expert-based approach to large-scale multi-output regression using single-output GP experts.

Gaussian Processes • regression
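A common expert-based GP approximation in this spirit (the paper's aggregation may differ in detail) is the product-of-experts rule, which fuses the predictions of M single-output GP experts, each trained on a subset of the data, by precision weighting:

$$\sigma_*^{-2} = \sum_{k=1}^{M} \sigma_k^{-2}(x_*), \qquad \mu_* = \sigma_*^{2} \sum_{k=1}^{M} \sigma_k^{-2}(x_*)\,\mu_k(x_*),$$

so each expert contributes to the fused prediction in proportion to its confidence at the test point $x_*$.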

Deep Residual Mixture Models

1 code implementation • 22 Jun 2020 • Perttu Hämäläinen, Martin Trapp, Tuure Saloheimo, Arno Solin

We propose Deep Residual Mixture Models (DRMMs), a novel deep generative model architecture.

BIG-bench Machine Learning

Sum-Product-Transform Networks: Exploiting Symmetries using Invertible Transformations

2 code implementations • 4 May 2020 • Tomas Pevny, Vasek Smidl, Martin Trapp, Ondrej Polacek, Tomas Oberhuber

In this work, we propose Sum-Product-Transform Networks (SPTN), an extension of sum-product networks that uses invertible transformations as additional internal nodes.

Anomaly Detection • Density Estimation
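A transform node with an invertible map $g$ is accounted for by the standard change-of-variables identity, which is the mechanism any such internal node relies on:

$$p(x) = p_{\text{child}}\big(g(x)\big)\,\bigl|\det J_g(x)\bigr|,$$

so the node reshapes the child distribution while keeping the density exactly computable, provided the Jacobian determinant is tractable, as it is for affine maps.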

Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits

1 code implementation • ICML 2020 • Robert Peharz, Steven Lang, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Guy Van Den Broeck, Kristian Kersting, Zoubin Ghahramani

Probabilistic circuits (PCs) are a promising avenue for probabilistic modeling, as they permit a wide range of exact and efficient inference routines.
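The name comes from evaluating whole circuit layers as single einsum contractions. Below is a minimal sketch of a vectorised sum (mixture) layer using a numerically stabilised log-einsum-exp computation; shapes and names are illustrative, not the library's API:

    import numpy as np

    def sum_layer(log_probs, log_weights):
        """log_probs: (B, K) child log-densities; log_weights: (N, K)
        log mixture weights, assumed normalised over K.
        Returns (B, N) log-densities of N mixture nodes in one contraction."""
        m = log_probs.max(axis=-1, keepdims=True)   # (B, 1) stabiliser
        p = np.exp(log_probs - m)                   # safe linear domain
        w = np.exp(log_weights)                     # mixture weights
        return np.log(np.einsum('bk,nk->bn', p, w)) + m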

DynamicPPL: Stan-like Speed for Dynamic Probabilistic Models

2 code implementations • 7 Feb 2020 • Mohamed Tarek, Kai Xu, Martin Trapp, Hong Ge, Zoubin Ghahramani

Since DynamicPPL is a modular, stand-alone library, any probabilistic programming system written in Julia, such as Turing.jl, can use DynamicPPL to specify models and trace their model parameters.

Probabilistic Programming

Graph Tracking in Dynamic Probabilistic Programs via Source Transformations

no code implementations • AABI Symposium 2019 • Philipp Gabler, Martin Trapp, Hong Ge, Franz Pernkopf

Many modern machine learning algorithms, such as automatic differentiation (AD) and versions of approximate Bayesian inference, can be understood as a particular case of message passing on some computation graph.

BIG-bench Machine Learning • Probabilistic Programming
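To make the message-passing view concrete, here is a tiny reverse-mode AD sketch in which gradients are literally messages sent backwards along recorded graph edges (a sum-over-paths formulation; real systems propagate in topological order instead of re-walking shared subgraphs):

    class Node:
        def __init__(self, value, parents=(), grads=()):
            self.value = value
            self.parents = parents  # upstream nodes
            self.grads = grads      # local partials d(self)/d(parent)
            self.adjoint = 0.0

    def mul(a, b):
        return Node(a.value * b.value, (a, b), (b.value, a.value))

    def add(a, b):
        return Node(a.value + b.value, (a, b), (1.0, 1.0))

    def backward(node, message=1.0):
        """Each edge forwards the incoming message times its local partial."""
        node.adjoint += message
        for parent, g in zip(node.parents, node.grads):
            backward(parent, message * g)

    x = Node(3.0)
    y = add(mul(x, x), x)   # y = x^2 + x
    backward(y)
    print(x.adjoint)        # 7.0 = 2*3 + 1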

AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms

1 code implementation • AABI Symposium 2019 • Kai Xu, Hong Ge, Will Tebbutt, Mohamed Tarek, Martin Trapp, Zoubin Ghahramani

Stan's Hamiltonian Monte Carlo (HMC) has demonstrated remarkable sampling robustness and efficiency in a wide range of Bayesian inference problems, thanks to carefully crafted adaptation schemes for the celebrated No-U-Turn sampler (NUTS) algorithm.

Bayesian Inference • Benchmarking
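At the core of any such HMC implementation sits the leapfrog integrator. The sketch below shows the integrator itself; AdvancedHMC.jl is a Julia library, so this Python version is a language-neutral illustration, not its API:

    import numpy as np

    def leapfrog(grad_logp, q, p, step_size, n_steps):
        """Simulate Hamiltonian dynamics: position q, momentum p."""
        q, p = q.copy(), p.copy()
        p = p + 0.5 * step_size * grad_logp(q)   # initial half step
        for _ in range(n_steps - 1):
            q = q + step_size * p                # full position step
            p = p + step_size * grad_logp(q)     # full momentum step
        q = q + step_size * p
        p = p + 0.5 * step_size * grad_logp(q)   # final half step
        return q, p

    # Example: standard normal target, where grad log p(q) = -q
    q1, p1 = leapfrog(lambda q: -q, np.array([1.0]), np.array([0.5]),
                      step_size=0.1, n_steps=10)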

Deep Structured Mixtures of Gaussian Processes

1 code implementation • 10 Oct 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf, Carl E. Rasmussen

Gaussian Processes (GPs) are powerful non-parametric Bayesian regression models that allow exact posterior inference, but exhibit high computational and memory costs.

Gaussian Processes
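Both the exactness and the cost in that sentence come from the closed-form posterior. For training inputs $X$, targets $y$, kernel matrix $K = k(X, X)$ and noise variance $\sigma_n^2$, the predictive mean and variance at a test point $x_*$ are

$$\mu_* = k_*^{\top}(K + \sigma_n^2 I)^{-1} y, \qquad \sigma_*^2 = k(x_*, x_*) - k_*^{\top}(K + \sigma_n^2 I)^{-1} k_*,$$

where $k_* = k(X, x_*)$. The linear solve costs $\mathcal{O}(n^3)$ time and $\mathcal{O}(n^2)$ memory in the number of training points $n$, which is the bottleneck the structured mixtures are designed to reduce.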

Bayesian Learning of Sum-Product Networks

1 code implementation • NeurIPS 2019 • Martin Trapp, Robert Peharz, Hong Ge, Franz Pernkopf, Zoubin Ghahramani

While parameter learning in SPNs is well developed, structure learning leaves something to be desired: even though there is a plethora of SPN structure learners, most of them are somewhat ad hoc and based on intuition rather than a clear learning principle.

Optimisation of Overparametrized Sum-Product Networks

1 code implementation • 20 May 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf

It seems to be a pearl of conventional wisdom that parameter learning in deep sum-product networks is surprisingly fast compared to shallow mixture models.

Learning Deep Mixtures of Gaussian Process Experts Using Sum-Product Networks

1 code implementation • 12 Sep 2018 • Martin Trapp, Robert Peharz, Carl E. Rasmussen, Franz Pernkopf

In this paper, we introduce a natural and expressive way to tackle these problems, by incorporating GPs in sum-product networks (SPNs), a recently proposed tractable probabilistic model allowing exact and efficient inference.

Gaussian Processes • regression +1

Probabilistic Deep Learning using Random Sum-Product Networks

no code implementations • 5 Jun 2018 • Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Kristian Kersting, Zoubin Ghahramani

The need for consistent treatment of uncertainty has recently triggered increased interest in probabilistic deep learning methods.

Probabilistic Deep Learning

Safe Semi-Supervised Learning of Sum-Product Networks

1 code implementation • 10 Oct 2017 • Martin Trapp, Tamas Madl, Robert Peharz, Franz Pernkopf, Robert Trappl

In several domains, obtaining class annotations is expensive while unlabelled data are abundant.
