Search Results for author: David M. J. Tax

Found 24 papers, 6 papers with code

Learning From Scenarios for Stochastic Repairable Scheduling

1 code implementation • 6 Dec 2023 • Kim van den Houten, David M. J. Tax, Esteban Freydell, Mathijs de Weerdt

We are interested in a stochastic scheduling problem in which processing times are uncertain; this uncertainty propagates into the constraints, so an initial schedule may need to be repaired.

Scheduling • Stochastic Optimization

Personalized Anomaly Detection in PPG Data using Representation Learning and Biometric Identification

no code implementations • 12 Jul 2023 • Ramin Ghorbani, Marcel J. T. Reinders, David M. J. Tax

This paper introduces a two-stage framework leveraging representation learning and personalization to improve anomaly detection performance in PPG data.

Photoplethysmography (PPG) • Representation Learning +1

iPINNs: Incremental learning for Physics-informed neural networks

no code implementations • 10 Apr 2023 • Aleksandr Dekhovich, Marcel H. F. Sluiter, David M. J. Tax, Miguel A. Bessa

Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs).

Incremental Learning • Multi-Task Learning

Self-Supervised PPG Representation Learning Shows High Inter-Subject Variability

1 code implementation • 7 Dec 2022 • Ramin Ghorbani, Marcel J. T. Reinders, David M. J. Tax

Unfortunately, there is high inter-subject variability in the SSL-learned representations, which makes working with this data more challenging when labeled data is scarce.

Activity Recognition • Representation Learning +3

A view on model misspecification in uncertainty quantification

no code implementations • 30 Oct 2022 • Yuko Kato, David M. J. Tax, Marco Loog

Estimating uncertainty of machine learning models is essential to assess the quality of the predictions that these models provide.

Uncertainty Quantification

Continual Prune-and-Select: Class-incremental learning with specialized subnetworks

1 code implementation • 9 Aug 2022 • Aleksandr Dekhovich, David M. J. Tax, Marcel H. F. Sluiter, Miguel A. Bessa

In particular, CP&S is capable of sequentially learning 10 tasks from ImageNet-1000 while keeping accuracy around 94% with negligible forgetting, a first-of-its-kind result in class-incremental learning.

Class Incremental Learning • Incremental Learning +1

Conversation Group Detection With Spatio-Temporal Context

no code implementations • 2 Jun 2022 • Stephanie Tan, David M. J. Tax, Hayley Hung

These affinity values are also continuous in time, since relationships and group membership do not change instantaneously, even though the ground-truth labels of group membership are binary.

Graph Clustering
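The excerpt above describes continuous pairwise affinity values from which discrete conversation groups must be recovered. As a generic illustration (not the paper's actual clustering pipeline), one can threshold the affinity matrix and take connected components of the resulting graph:

```python
# Sketch: recover conversation groups from a pairwise affinity matrix
# by thresholding and taking connected components. This is a generic
# illustration, not the exact grouping method used in the paper.

def groups_from_affinity(affinity, threshold=0.5):
    """affinity: square 2D list of pairwise scores in [0, 1]."""
    n = len(affinity)
    # Build adjacency: an edge wherever affinity exceeds the threshold.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if affinity[i][j] >= threshold:
                adj[i].add(j)
                adj[j].add(i)
    # Connected components of this graph are the groups.
    seen, groups = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups

# Example: persons 0-1 converse, 2-3 converse, person 4 is alone.
A = [
    [1.0, 0.9, 0.1, 0.0, 0.2],
    [0.9, 1.0, 0.2, 0.1, 0.0],
    [0.1, 0.2, 1.0, 0.8, 0.1],
    [0.0, 0.1, 0.8, 1.0, 0.3],
    [0.2, 0.0, 0.1, 0.3, 1.0],
]
print(groups_from_affinity(A))  # [[0, 1], [2, 3], [4]]
```

A graph-clustering method as tagged for the paper would replace the hard threshold with a learned or spectral criterion; the threshold here is purely for illustration.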

Neural network relief: a pruning algorithm based on neural activity

1 code implementation • 22 Sep 2021 • Aleksandr Dekhovich, David M. J. Tax, Marcel H. F. Sluiter, Miguel A. Bessa

Current deep neural networks (DNNs) are overparameterized and use most of their neuronal connections during inference for each task.
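The observation above motivates pruning connections that contribute little neuronal activity. As a simplified sketch of that idea (the actual NNrelief criterion is more involved), one can measure each unit's average absolute activation over a batch and zero out the outgoing weights of near-silent units:

```python
# Simplified sketch of activity-based pruning: units whose average
# absolute activation over a batch falls below a fraction of the most
# active unit get their outgoing weights zeroed. An illustrative
# criterion only, not the exact algorithm from the paper.

def prune_by_activity(activations, weights, keep_ratio=0.25):
    """
    activations: list of per-sample activation vectors (batch x units)
    weights:     outgoing weight rows, one list per unit
    keep_ratio:  prune units with mean |activity| < keep_ratio * max
    """
    n_units = len(weights)
    mean_act = [
        sum(abs(sample[u]) for sample in activations) / len(activations)
        for u in range(n_units)
    ]
    cutoff = keep_ratio * max(mean_act)
    pruned = []
    for u in range(n_units):
        if mean_act[u] < cutoff:
            weights[u] = [0.0] * len(weights[u])
            pruned.append(u)
    return pruned

acts = [[2.0, 0.1, 1.5], [1.8, 0.0, 1.2]]   # unit 1 is nearly silent
W = [[0.5, -0.3], [0.7, 0.2], [-0.1, 0.9]]
print(prune_by_activity(acts, W))  # [1] — unit 1 pruned, its row zeroed
```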

A Brief Prehistory of Double Descent

no code implementations • 7 Apr 2020 • Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax

In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.

Prehistory

Characterizing multiple instance datasets

no code implementations • 21 Jun 2018 • Veronika Cheplygina, David M. J. Tax

When comparing different MIL classifiers, it is important to understand the differences between the datasets used in the comparison.

Multiple Instance Learning

Unsupervised Learning of Sequence Representations by Autoencoders

no code implementations • 3 Apr 2018 • Wenjie Pei, David M. J. Tax

Sequence data is challenging for machine learning approaches because sequence lengths may vary between samples.

Attended End-to-end Architecture for Age Estimation from Facial Expression Videos

no code implementations • 23 Nov 2017 • Wenjie Pei, Hamdi Dibeklioğlu, Tadas Baltrušaitis, David M. J. Tax

In this paper, we present an end-to-end architecture for age estimation, called Spatially-Indexed Attention Model (SIAM), which is able to simultaneously learn both the appearance and dynamics of age from raw videos of facial expressions.

Age Estimation

Interacting Attention-gated Recurrent Networks for Recommendation

no code implementations • 5 Sep 2017 • Wenjie Pei, Jie Yang, Zhu Sun, Jie Zhang, Alessandro Bozzon, David M. J. Tax

In particular, we propose a novel attention scheme to learn the attention scores of user and item history in an interacting way, thereby accounting for the dependencies between user and item dynamics in shaping user-item interactions.

Label Stability in Multiple Instance Learning

no code implementations • 15 Mar 2017 • Veronika Cheplygina, Lauge Sørensen, David M. J. Tax, Marleen de Bruijne, Marco Loog

We address the problem of instance label stability in multiple instance learning (MIL) classifiers.

Multiple Instance Learning

Temporal Attention-Gated Model for Robust Sequence Classification

1 code implementation • CVPR 2017 • Wenjie Pei, Tadas Baltrušaitis, David M. J. Tax, Louis-Philippe Morency

An important advantage of our approach is interpretability since the temporal attention weights provide a meaningful value for the salience of each time step in the sequence.

Classification • General Classification +1

Modeling Time Series Similarity with Siamese Recurrent Networks

no code implementations • 15 Mar 2016 • Wenjie Pei, David M. J. Tax, Laurens van der Maaten

Traditional techniques for measuring similarities between time series are based on handcrafted similarity measures, whereas more recent learning-based approaches cannot exploit external supervision.

Domain Classification • General Classification +5
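The abstract excerpt contrasts handcrafted similarity measures with a learned, Siamese recurrent approach. The architecture can be sketched as a single shared recurrent encoder applied to both time series, with similarity computed between the two encodings. The weights below are random and untrained; this illustrates the structure only, not the trained model from the paper:

```python
import math, random

# Siamese recurrent sketch: one shared tanh-RNN encoder maps each
# (variable-length) time series to a fixed-size vector; similarity is
# the cosine between the two encodings. Random, untrained weights --
# structural illustration only.

random.seed(0)
H = 4  # hidden size (illustrative choice)
W_in = [random.uniform(-0.5, 0.5) for _ in range(H)]                 # 1-D input
W_h = [[random.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(H)]

def encode(series):
    """Shared encoder: plain tanh RNN, returns the final hidden state."""
    h = [0.0] * H
    for x in series:
        h = [
            math.tanh(x * W_in[j] + sum(h[i] * W_h[i][j] for i in range(H)))
            for j in range(H)
        ]
    return h

def similarity(a, b):
    """Cosine similarity between the two shared encodings."""
    ea, eb = encode(a), encode(b)
    dot = sum(x * y for x, y in zip(ea, eb))
    na = math.sqrt(sum(x * x for x in ea))
    nb = math.sqrt(sum(x * x for x in eb))
    return dot / (na * nb)

# Identical series are maximally similar under the shared encoder.
print(round(similarity([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]), 3))
```

Because the same encoder weights are used for both inputs, the learned similarity is symmetric by construction, which is the key property of the Siamese setup.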

Survey on the attention based RNN model and its applications in computer vision

no code implementations • 25 Jan 2016 • Feng Wang, David M. J. Tax

In this survey, we introduce attention-based RNN models that can focus on different parts of the input for each output item, in order to explore and exploit the implicit relations between the input and the output items.

Implicit Relations
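The core mechanism the survey covers can be sketched in a few lines: for each output step, score every input position against a query, normalize the scores with a softmax, and take the weighted sum as a context vector. Dot-product scoring is used here as one common choice among the variants such surveys discuss:

```python
import math

# Minimal attention sketch: dot-product scores over input hidden
# states, softmax normalization, weighted-sum context vector.

def attend(query, states):
    """states: list of hidden-state vectors; query: vector of same size."""
    scores = [sum(q * s for q, s in zip(query, st)) for st in states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(states[0])
    context = [sum(w * st[d] for w, st in zip(weights, states))
               for d in range(dim)]
    return weights, context

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attend([1.0, 0.0], states)
print([round(w, 2) for w in weights])  # mass concentrates on positions matching the query
```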

Time Series Classification using the Hidden-Unit Logistic Model

no code implementations • 16 Jun 2015 • Wenjie Pei, Hamdi Dibeklioğlu, David M. J. Tax, Laurens van der Maaten

We present a new model for time series classification, called the hidden-unit logistic model, that uses binary stochastic hidden units to model latent structure in the data.

Action Recognition • Action Unit Detection +9

On Classification with Bags, Groups and Sets

no code implementations • 2 Jun 2014 • Veronika Cheplygina, David M. J. Tax, Marco Loog

To better deal with such problems, several extensions of supervised learning have been proposed, where training and/or test objects are sets of feature vectors.

Classification • General Classification

Quantile Representation for Indirect Immunofluorescence Image Classification

no code implementations • 6 Feb 2014 • David M. J. Tax, Veronika Cheplygina, Marco Loog

Considering one whole slide as a collection (a bag) of feature vectors, however, poses the problem of how to handle this bag.

Classification • General Classification +1
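One way to handle a bag of feature vectors, suggested by the paper's title, is to summarize it with per-feature quantiles, giving a fixed-length vector that standard classifiers can consume. The sketch below is a simplified reading of that idea; the paper's exact construction may differ:

```python
# Sketch of a quantile representation: a bag (set of instance feature
# vectors) is summarized by a fixed-length vector of per-feature
# quantiles, so an ordinary classifier can be applied per bag.

def quantile(values, q):
    """Linear-interpolation quantile of a list, q in [0, 1]."""
    xs = sorted(values)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    frac = pos - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

def bag_representation(bag, qs=(0.25, 0.5, 0.75)):
    """bag: list of instance vectors -> concatenated per-feature quantiles."""
    n_features = len(bag[0])
    rep = []
    for f in range(n_features):
        column = [inst[f] for inst in bag]
        rep.extend(quantile(column, q) for q in qs)
    return rep

bag = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]]
print(bag_representation(bag, qs=(0.5,)))  # per-feature medians: [2.5, 25.0]
```

The appeal of quantiles over, say, the mean is robustness to outlier instances within a slide, while still yielding a vector of fixed length regardless of bag size.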

Dissimilarity-based Ensembles for Multiple Instance Learning

no code implementations • 6 Feb 2014 • Veronika Cheplygina, David M. J. Tax, Marco Loog

In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors.

Multiple Instance Learning

Multiple Instance Learning with Bag Dissimilarities

no code implementations • 22 Sep 2013 • Veronika Cheplygina, David M. J. Tax, Marco Loog

Multiple instance learning (MIL) is concerned with learning from sets (bags) of objects (instances), where the individual instance labels are ambiguous.

Multiple Instance Learning
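Bag dissimilarities as described above can be made concrete with one simple family member: the average distance from each instance in one bag to its nearest instance in the other. The paper studies a broader family of such measures; this is a single illustrative example:

```python
import math

# One common bag dissimilarity for MIL: the mean, over instances in
# bag A, of the distance to the nearest instance in bag B. Note that
# this measure is not symmetric in general.

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def mean_min_dissimilarity(bag_a, bag_b):
    """Average nearest-neighbor distance from bag_a's instances to bag_b."""
    return sum(min(euclidean(x, y) for y in bag_b) for x in bag_a) / len(bag_a)

A = [[0.0, 0.0], [1.0, 0.0]]
B = [[0.0, 0.0], [5.0, 0.0]]
print(mean_min_dissimilarity(A, B))  # (0.0 + 1.0) / 2 = 0.5
```

Given such a dissimilarity, each bag can be represented by its dissimilarities to a set of reference bags, turning the MIL problem into a standard vector-based classification problem.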
