no code implementations • 29 Jun 2023 • Lyse Naomi Wamba Momo, Nyalleng Moorosi, Elaine O. Nsoesie, Frank Rademakers, Bart De Moor
In this study, we predict early hospital length of stay (LoS) at the granular level of admission units by applying domain adaptation to leverage information learned from a potential source domain.
2 code implementations • 26 May 2023 • Sonny Achten, Arun Pandey, Hannes De Meulemeester, Bart De Moor, Johan A. K. Suykens
We propose a unifying setting that combines existing restricted kernel machine methods into a single primal-dual multi-view framework for kernel principal component analysis in both supervised and unsupervised settings.
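As a point of reference for this entry, the dual form of plain kernel PCA, the building block that such a primal-dual multi-view framework generalises, can be sketched as follows. This is standard kernel PCA with an RBF kernel, not the paper's formulation; all names and the choice of kernel are illustrative.

```python
# Minimal sketch of dual kernel PCA (illustrative; not the paper's multi-view method).
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Dual kernel PCA with an RBF kernel on data matrix X (rows are samples)."""
    # Pairwise squared distances and the RBF kernel matrix.
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * sq_dists)

    # Centre the kernel matrix in feature space.
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Dual solution: leading eigenvectors of the centred kernel matrix.
    eigvals, eigvecs = np.linalg.eigh(K_c)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Scores of the training points on the principal components.
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

X = np.random.default_rng(0).normal(size=(100, 5))
print(kernel_pca(X, n_components=2).shape)  # (100, 2)
```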
1 code implementation • 28 Apr 2023 • Vincent Scheltjens, Lyse Naomi Wamba Momo, Wouter Verbeke, Bart De Moor
In this work, we address client recruitment, the step that precedes the initiation of a federated network for model training.
no code implementations • 24 Jan 2023 • Arun Pandey, Hannes De Meulemeester, Bart De Moor, Johan A. K. Suykens
In this paper, we propose a kernel principal component analysis model for multivariate time series forecasting, where the training and prediction schemes are derived from the multi-view formulation of Restricted Kernel Machines.
no code implementations • 6 Apr 2021 • Joachim Schreurs, Hannes De Meulemeester, Michaël Fanuel, Bart De Moor, Johan A. K. Suykens
A generative model may overlook underrepresented modes that are less frequent in the empirical data distribution.
no code implementations • 28 Sep 2020 • Hannes De Meulemeester, Joachim Schreurs, Michaël Fanuel, Bart De Moor, Johan Suykens
However, under certain circumstances, the training of GANs can lead to mode collapse or mode dropping, i.e., the generative models being unable to sample from the entire probability distribution.
no code implementations • 16 Jun 2020 • Hannes De Meulemeester, Joachim Schreurs, Michaël Fanuel, Bart De Moor, Johan A. K. Suykens
However, under certain circumstances, the training of GANs can lead to mode collapse or mode dropping, i.e., the generative models being unable to sample from the entire probability distribution.
1 code implementation • 8 Mar 2018 • Oliver Lauwers, Bart De Moor
In this way, we provide a purely data-driven means of assessing the different underlying dynamics of input/output signal pairs, without the need for any system identification step.
no code implementations • 6 Mar 2017 • Oliver Lauwers, Bart De Moor
The first class of methods employs a distance measure on time series (e.g., Euclidean, Dynamic Time Warping) and a clustering technique (e.g., k-means, k-medoids, hierarchical clustering) to find natural groups in the dataset.
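A minimal sketch of this first class of methods, pairing a distance measure with a clustering technique, could look as follows. Euclidean distance and average-linkage hierarchical clustering are used for simplicity; swapping in Dynamic Time Warping or k-medoids would follow the same pattern. The toy data and parameters are illustrative only.

```python
# Distance measure on raw time series + hierarchical clustering (illustrative).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
# Toy dataset: 20 series of length 50 drawn around two different templates.
series = np.vstack([np.sin(t) + 0.1 * rng.normal(size=50) for _ in range(10)] +
                   [np.cos(t) + 0.1 * rng.normal(size=50) for _ in range(10)])

# Step 1: distance measure between full time series (Euclidean here).
dists = pdist(series, metric="euclidean")

# Step 2: clustering technique (average-linkage hierarchical clustering).
Z = linkage(dists, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # the two natural groups are recovered
```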
no code implementations • 28 Apr 2015 • Marc Claesen, Frank De Smet, Pieter Gillard, Chantal Mathieu, Bart De Moor
We present a novel risk profiling approach based exclusively on health expenditure data that is available to Belgian mutual health insurers.
2 code implementations • 26 Apr 2015 • Marc Claesen, Jesse Davis, Frank De Smet, Bart De Moor
We provide theoretical bounds on the quality of our estimates, illustrate the importance of estimating the fraction of positives in the unlabeled set, and demonstrate empirically that we can reliably estimate ROC and PR curves on real data.
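To illustrate the role played by the fraction of positives in the unlabeled set, here is a hedged sketch of the underlying mixture idea: if a fraction beta of the unlabeled examples are positives that score like the labeled positives, an ROC curve computed by treating unlabeled as negative can be corrected. This shows the principle only; it is not the authors' estimator and does not reproduce their bounds.

```python
# Correcting a "unlabeled-as-negative" ROC with an assumed positive fraction (sketch).
import numpy as np

def corrected_roc(scores_pos, scores_unl, beta):
    """beta: assumed fraction of true positives hidden in the unlabeled set."""
    thresholds = np.unique(np.concatenate([scores_pos, scores_unl]))[::-1]
    tpr, fpr = [], []
    for thr in thresholds:
        tpr_obs = np.mean(scores_pos >= thr)   # known positives above threshold
        fpr_obs = np.mean(scores_unl >= thr)   # unlabeled above threshold
        # Mixture identity: fpr_obs = beta * tpr_true + (1 - beta) * fpr_true,
        # with tpr_true approximated by tpr_obs.
        fpr_true = np.clip((fpr_obs - beta * tpr_obs) / (1.0 - beta), 0.0, 1.0)
        tpr.append(tpr_obs)
        fpr.append(fpr_true)
    return np.array(fpr), np.array(tpr)

rng = np.random.default_rng(1)
scores_pos = rng.normal(1.0, 1.0, size=500)                      # labeled positives
scores_unl = np.concatenate([rng.normal(1.0, 1.0, size=200),     # hidden positives
                             rng.normal(-1.0, 1.0, size=800)])   # true negatives
fpr, tpr = corrected_roc(scores_pos, scores_unl, beta=0.2)
order = np.argsort(fpr)
auc = float(np.sum(np.diff(fpr[order]) * (tpr[order][1:] + tpr[order][:-1]) / 2))
print(auc)  # corrected AUC estimate
```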
no code implementations • 7 Feb 2015 • Marc Claesen, Bart De Moor
We introduce the hyperparameter search problem in the field of machine learning and discuss its main challenges from an optimization perspective.
1 code implementation • 2 Dec 2014 • Marc Claesen, Jaak Simm, Dusan Popovic, Yves Moreau, Bart De Moor
Optunity is a free software package dedicated to hyperparameter optimization.
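Since Optunity is the package presented here, a minimal usage sketch may help. It follows the maximize-a-black-box-objective pattern from the project's documentation, but the exact call signature and return values should be checked against the Optunity documentation for the installed version.

```python
import optunity

# Black-box objective to maximize; hyperparameters arrive as keyword arguments.
def objective(x, y):
    return -(x - 1.0) ** 2 - (y + 2.0) ** 2

# Box constraints per hyperparameter; Optunity samples within them.
optimal_pars, details, _ = optunity.maximize(objective, num_evals=100,
                                             x=[-5, 5], y=[-5, 5])
print(optimal_pars)  # roughly {'x': 1.0, 'y': -2.0}
```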
1 code implementation • 4 Mar 2014 • Marc Claesen, Frank De Smet, Johan Suykens, Bart De Moor
EnsembleSVM is a free software package containing efficient routines to perform ensemble learning with support vector machine (SVM) base models.
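For readers who want to try the idea of ensembles of SVM base models without the package itself, a conceptual sketch with scikit-learn's bagging is given below. This is not EnsembleSVM and does not reflect its C++ tools or options; it only illustrates the ensemble-of-SVMs pattern.

```python
# Bagged ensemble of SVM base models (conceptual illustration, not EnsembleSVM).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Train many SVMs on bootstrap subsamples and aggregate their predictions.
ensemble = BaggingClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"),
                             n_estimators=10, max_samples=0.5, random_state=0)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```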
1 code implementation • 4 Mar 2014 • Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor
We present an approximation scheme for support vector machine models that use an RBF kernel.
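As a rough illustration of what approximating an RBF-kernel SVM buys, the sketch below uses random Fourier features (a well-known but different approximation technique) to replace the kernel with an explicit low-dimensional feature map, so a linear model can mimic the RBF decision function with fast prediction. The paper's own scheme may differ; parameters here are arbitrary.

```python
# RBF kernel approximation via random Fourier features (not the paper's scheme).
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

exact = SVC(kernel="rbf", gamma=0.1).fit(X, y)

# Map inputs to a randomized feature space where a linear model
# approximates the RBF-kernel decision function.
approx = make_pipeline(RBFSampler(gamma=0.1, n_components=300, random_state=0),
                       SGDClassifier(random_state=0)).fit(X, y)

print(exact.score(X, y), approx.score(X, y))
```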
1 code implementation • 13 Feb 2014 • Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor
The included benchmark comprises three settings with increasing label noise: (i) fully supervised, (ii) PU learning and (iii) PU learning with false positives.
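A hedged sketch of how such settings can be constructed from a fully labeled dataset is shown below; the proportions and sampling used in the paper's actual benchmark may differ.

```python
# Constructing the three label-noise settings from ground-truth labels (sketch).
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)          # ground-truth labels

# (i) fully supervised: labels are used as-is.
y_supervised = y_true.copy()

# (ii) PU learning: only part of the positives keep their label,
# everything else becomes "unlabeled" (encoded here as 0).
y_pu = np.zeros_like(y_true)
pos_idx = np.flatnonzero(y_true == 1)
labeled_pos = rng.choice(pos_idx, size=len(pos_idx) // 2, replace=False)
y_pu[labeled_pos] = 1

# (iii) PU learning with false positives: additionally, some negatives
# are wrongly marked as positive.
y_pu_fp = y_pu.copy()
neg_idx = np.flatnonzero(y_true == 0)
false_pos = rng.choice(neg_idx, size=len(neg_idx) // 20, replace=False)
y_pu_fp[false_pos] = 1
```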