1 code implementation • 6 Dec 2023 • Kim van den Houten, David M. J. Tax, Esteban Freydell, Mathijs de Weerdt
We are interested in a stochastic scheduling problem in which processing times are uncertain; this uncertainty propagates into the constraints, so an initial schedule may need to be repaired.
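The interplay between an initial schedule and repair under realized durations can be illustrated with a minimal right-shift repair sketch. This is a hypothetical example, not the paper's method: `right_shift_repair`, the job names, and the precedence structure are all invented for illustration.

```python
def right_shift_repair(start_times, durations, precedence):
    """Repair a schedule by delaying successors until every
    precedence constraint holds under the realized durations."""
    repaired = dict(start_times)
    changed = True
    while changed:
        changed = False
        for pred, succ in precedence:
            finish = repaired[pred] + durations[pred]
            if repaired[succ] < finish:
                repaired[succ] = finish  # push the successor later
                changed = True
    return repaired

# Initial schedule built for the expected durations.
start = {"A": 0, "B": 5, "C": 8}
precedence = [("A", "B"), ("B", "C")]

# One realization in which job A takes longer than planned.
realized = {"A": 7, "B": 3, "C": 2}
print(right_shift_repair(start, realized, precedence))
# {'A': 0, 'B': 7, 'C': 10}
```

Right-shifting preserves job order and only delays start times, which is the simplest repair policy; the cost of such repairs is what makes anticipating uncertainty in the initial schedule worthwhile.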
no code implementations • 12 Jul 2023 • Ramin Ghorbani, Marcel J. T. Reinders, David M. J. Tax
This paper introduces a two-stage framework leveraging representation learning and personalization to improve anomaly detection performance in PPG data.
no code implementations • 10 Apr 2023 • Aleksandr Dekhovich, Marcel H. F. Sluiter, David M. J. Tax, Miguel A. Bessa
Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs).
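The core idea behind PINNs, minimizing a PDE residual at collocation points, can be sketched without a neural network. The example below (an assumption-laden simplification, not the paper's method) replaces the network with a fixed sine basis so the residual minimization becomes a linear least-squares problem; an actual PINN would use an MLP and automatic differentiation instead.

```python
import numpy as np

# Collocation points in the interior of the domain [0, 1].
x = np.linspace(0.05, 0.95, 50)

# Trial solution u(x) = sum_k a_k sin(k*pi*x); every basis function
# already satisfies the boundary conditions u(0) = u(1) = 0.
K = 5
phi_xx = np.stack([-(k * np.pi) ** 2 * np.sin(k * np.pi * x)
                   for k in range(1, K + 1)], axis=1)

# PDE: u''(x) = -pi^2 sin(pi*x). Minimize the squared residual
# over the coefficients a; here that is linear least squares.
f = -np.pi ** 2 * np.sin(np.pi * x)
a, *_ = np.linalg.lstsq(phi_xx, f, rcond=None)

print(np.round(a, 4))  # close to [1, 0, 0, 0, 0]: u(x) = sin(pi*x)
```

A PINN solves the same residual-minimization problem, but because the network's parameters enter nonlinearly, the loss landscape is nonconvex, which is one source of the training difficulties PINN research addresses.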
1 code implementation • 7 Dec 2022 • Ramin Ghorbani, Marcel J. T. Reinders, David M. J. Tax
Unfortunately, there is high inter-subject variability in the SSL-learned representations, which makes working with this data more challenging when labeled data is scarce.
no code implementations • 30 Oct 2022 • Yuko Kato, David M. J. Tax, Marco Loog
Estimating the uncertainty of machine learning models is essential for assessing the quality of the predictions these models provide.
1 code implementation • 9 Aug 2022 • Aleksandr Dekhovich, David M. J. Tax, Marcel H. F. Sluiter, Miguel A. Bessa
In particular, CP&S is capable of sequentially learning 10 tasks from ImageNet-1000 while keeping accuracy around 94% with negligible forgetting, a first-of-its-kind result in class-incremental learning.
no code implementations • 2 Jun 2022 • Stephanie Tan, David M. J. Tax, Hayley Hung
These affinity values are also continuous in time, since relationships and group membership do not form instantaneously, even though the ground-truth labels of group membership are binary.
1 code implementation • 22 Sep 2021 • Aleksandr Dekhovich, David M. J. Tax, Marcel H. F. Sluiter, Miguel A. Bessa
Current deep neural networks (DNNs) are overparameterized and use most of their neuronal connections during inference for each task.
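Extracting a sparse subnetwork from an overparameterized model is often done by magnitude pruning. The sketch below shows that generic building block, not CP&S itself (which derives task-specific subnetworks through its own iterative procedure); the function name and sparsity level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights, returning the
    pruned weights and the binary connectivity mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, sparsity=0.75)
print(f"kept {int(mask.sum())} of {mask.size} connections")
```

Freezing the surviving connections for one task leaves the pruned-away capacity free for later tasks, which is the intuition behind subnetwork-based continual learning.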
no code implementations • 7 Apr 2020 • Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax
In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.
no code implementations • 21 Jun 2018 • Veronika Cheplygina, David M. J. Tax
When comparing different MIL classifiers, it is important to understand the differences between the datasets used in the comparison.
no code implementations • 3 Apr 2018 • Wenjie Pei, David M. J. Tax
Sequence data is challenging for machine learning approaches because sequence lengths may vary between samples.
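A standard way to handle varying lengths is to pad sequences to a common length and carry a boolean mask so padded positions are ignored. This is a generic sketch of that pattern, not the paper's model; the helper name and toy data are assumptions.

```python
import numpy as np

def pad_and_mask(sequences, pad_value=0.0):
    """Pad variable-length sequences into one array plus a mask
    that marks the real (non-padded) positions."""
    max_len = max(len(s) for s in sequences)
    batch = np.full((len(sequences), max_len), pad_value)
    mask = np.zeros((len(sequences), max_len), dtype=bool)
    for i, s in enumerate(sequences):
        batch[i, :len(s)] = s
        mask[i, :len(s)] = True
    return batch, mask

seqs = [[1.0, 2.0, 3.0], [4.0], [5.0, 6.0]]
batch, mask = pad_and_mask(seqs)

# A masked mean ignores the padded positions.
means = (batch * mask).sum(axis=1) / mask.sum(axis=1)
print(means)  # [2.  4.  5.5]
```

Any pooling or attention operation downstream must respect the same mask, otherwise the padding values leak into the sequence representation.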
no code implementations • 23 Nov 2017 • Wenjie Pei, Hamdi Dibeklioğlu, Tadas Baltrušaitis, David M. J. Tax
In this paper, we present an end-to-end architecture for age estimation, called Spatially-Indexed Attention Model (SIAM), which is able to simultaneously learn both the appearance and dynamics of age from raw videos of facial expressions.
no code implementations • 5 Sep 2017 • Wenjie Pei, Jie Yang, Zhu Sun, Jie Zhang, Alessandro Bozzon, David M. J. Tax
In particular, we propose a novel attention scheme that learns the attention scores of user and item history in an interacting way, thereby accounting for the dependencies between user and item dynamics in shaping user-item interactions.
no code implementations • 15 Mar 2017 • Veronika Cheplygina, Lauge Sørensen, David M. J. Tax, Jesper Holst Pedersen, Marco Loog, Marleen de Bruijne
Chronic obstructive pulmonary disease (COPD) is a lung disease for which early detection improves survival.
no code implementations • 15 Mar 2017 • Veronika Cheplygina, Lauge Sørensen, David M. J. Tax, Marleen de Bruijne, Marco Loog
We address the problem of \emph{instance label stability} in multiple instance learning (MIL) classifiers.
1 code implementation • CVPR 2017 • Wenjie Pei, Tadas Baltrušaitis, David M. J. Tax, Louis-Philippe Morency
An important advantage of our approach is interpretability since the temporal attention weights provide a meaningful value for the salience of each time step in the sequence.
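The mechanism behind such interpretable weights can be sketched in a few lines: a score per time step, normalized by a softmax, weights the hidden states into a single sequence representation. This is a generic temporal-attention sketch with invented dimensions and a random query vector, not the CVPR 2017 model itself.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)

# Hidden states of a sequence model, one vector per time step.
T, d = 6, 4
h = rng.normal(size=(T, d))

# A (here random, normally learned) query vector scores each time
# step; the softmax turns the scores into weights that sum to one.
q = rng.normal(size=d)
weights = softmax(h @ q)

# The sequence representation is the weighted sum over time steps;
# the weights themselves indicate how salient each step is.
context = weights @ h
print(np.round(weights, 3))
```

Because the weights form a distribution over time steps, plotting them directly shows which parts of the sequence the model attended to, which is exactly the interpretability advantage the abstract refers to.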
no code implementations • 15 Mar 2016 • Wenjie Pei, David M. J. Tax, Laurens van der Maaten
Traditional techniques for measuring similarities between time series are based on handcrafted similarity measures, whereas more recent learning-based approaches cannot exploit external supervision.
no code implementations • 25 Jan 2016 • Feng Wang, David M. J. Tax
In this survey, we introduce attention-based RNN models that can focus on different parts of the input for each output item, in order to explore and exploit the implicit relations between the input and the output items.
no code implementations • 16 Jun 2015 • Wenjie Pei, Hamdi Dibeklioğlu, David M. J. Tax, Laurens van der Maaten
We present a new model for time series classification, called the hidden-unit logistic model, that uses binary stochastic hidden units to model latent structure in the data.
no code implementations • 2 Jun 2014 • Veronika Cheplygina, David M. J. Tax, Marco Loog
To better deal with such problems, several extensions of supervised learning have been proposed in which the training and/or test objects are sets of feature vectors.
no code implementations • 6 Feb 2014 • David M. J. Tax, Veronika Cheplygina, Marco Loog
Considering one whole slide as a collection (a bag) of feature vectors, however, poses the problem of how to handle this bag.
no code implementations • 6 Feb 2014 • Veronika Cheplygina, David M. J. Tax, Marco Loog
In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors.
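A minimal way to make bags amenable to standard classifiers is to pool each bag's instances into a fixed-length feature vector. The sketch below illustrates that setup under the classic MIL assumption (a positive bag contains at least one "witness" instance); the helper names, pooling choice, and synthetic data are all assumptions, not the papers' methods.

```python
import numpy as np

rng = np.random.default_rng(3)

def bag_features(bag):
    """Embed a bag of instances via simple pooling statistics."""
    bag = np.asarray(bag)
    return np.concatenate([bag.mean(axis=0), bag.max(axis=0)])

def make_bag(positive):
    """A bag of 3-7 two-dimensional instances; positive bags get
    one witness instance drawn from a shifted distribution."""
    n = rng.integers(3, 8)
    bag = rng.normal(0, 1, size=(n, 2))
    if positive:
        bag[0] += 3.0  # the witness instance
    return bag

bags = [make_bag(p) for p in [True, False] * 20]
labels = np.array([1, 0] * 20)
X = np.array([bag_features(b) for b in bags])
print(X.shape)  # (40, 4): one fixed-length vector per bag
```

The max-pooling component is what lets a single witness instance dominate the bag representation; mean pooling alone would dilute it, which is one reason the choice of bag-level embedding matters in MIL.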
no code implementations • 22 Sep 2013 • Veronika Cheplygina, David M. J. Tax, Marco Loog
Multiple instance learning (MIL) is concerned with learning from sets (bags) of objects (instances), where the individual instance labels are ambiguous.