1 code implementation • 17 Jul 2023 • Luis Pedro Silvestrin, Shujian Yu, Mark Hoogendoorn
In this paper, we revisit the robustness of the minimum error entropy (MEE) criterion, a widely used objective in statistical signal processing to deal with non-Gaussian noises, and investigate its feasibility and usefulness in real-life transfer learning regression tasks, where distributional shifts are common.
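The MEE criterion replaces the usual squared-error objective with the (Renyi quadratic) entropy of the prediction errors, estimated with a Parzen kernel, which makes it less sensitive to non-Gaussian noise. A minimal numpy sketch of that loss, assuming a Gaussian kernel with bandwidth `sigma` (the function name and bandwidth choice are illustrative, not taken from the paper):

```python
import numpy as np

def mee_loss(y_true, y_pred, sigma=1.0):
    """Empirical MEE loss: Renyi's quadratic entropy of the errors,
    estimated with a Gaussian (Parzen) kernel of bandwidth `sigma`.
    Minimizing this entropy concentrates the error distribution."""
    e = y_true - y_pred                        # prediction errors
    diff = e[:, None] - e[None, :]             # all pairwise error differences
    # Information potential: mean Gaussian kernel value over all pairs
    v = np.mean(np.exp(-diff**2 / (2 * sigma**2)))
    return -np.log(v)                          # entropy = -log(potential)

# Tightly clustered errors give lower entropy than widely spread errors
rng = np.random.default_rng(0)
y = rng.normal(size=200)
tight = mee_loss(y, y + rng.normal(scale=0.1, size=200))
wide = mee_loss(y, y + rng.normal(scale=2.0, size=200))
assert tight < wide
```

Because only the shape of the error distribution matters, outliers from heavy-tailed noise pull the loss around far less than they would a mean-squared error.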
1 code implementation • 26 Jun 2023 • Leonardos Pantiskas, Kees Verstoep, Mark Hoogendoorn, Henri Bal
Nowadays, the deployment of deep learning models on edge devices for addressing real-world classification problems is becoming more prevalent.
1 code implementation • 25 Jan 2023 • David M. Knigge, David W. Romero, Albert Gu, Efstratios Gavves, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn, Jan-Jakob Sonke
Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to consider the length, resolution, and dimensionality of the input data.
no code implementations • 13 Dec 2022 • Olivier Moulin, Vincent Francois-Lavet, Mark Hoogendoorn
An eco-system of agents, each with its own policy offering some, but limited, generalizability, has proven to be a reliable approach for increasing generalization across procedurally generated environments.
1 code implementation • 31 Oct 2022 • Jacob E. Kooi, Mark Hoogendoorn, Vincent François-Lavet
In the context of MDPs with high-dimensional states, downstream tasks are predominantly applied on a compressed, low-dimensional representation of the original input space.
1 code implementation • 14 Oct 2022 • Leonardos Pantiskas, Kees Verstoep, Mark Hoogendoorn, Henri Bal
We also show that if we keep the transformation method constant, there is a statistically significant difference in accuracy results when applying it across different dimensions, with accuracy differences ranging from 0.23 to 47.79 percentage points.
1 code implementation • 7 Jun 2022 • David W. Romero, David M. Knigge, Albert Gu, Erik J. Bekkers, Efstratios Gavves, Jakub M. Tomczak, Mark Hoogendoorn
The use of Convolutional Neural Networks (CNNs) is widespread in Deep Learning due to a range of desirable model properties which result in an efficient and effective machine learning framework.
no code implementations • 13 Apr 2022 • Olivier Moulin, Vincent Francois-Lavet, Paul Elbers, Mark Hoogendoorn
Adapting a Reinforcement Learning (RL) agent to an unseen environment is a difficult task due to typical over-fitting on the training environment.
1 code implementation • 4 Apr 2022 • Leonardos Pantiskas, Kees Verstoep, Mark Hoogendoorn, Henri Bal
We show that we achieve speedups ranging from 9x to 53x compared to ROCKET during inference on an edge device, while maintaining comparable accuracy across datasets.
1 code implementation • 24 Mar 2022 • Etienne van de Bijl, Jan Klein, Joris Pries, Sandjai Bhulai, Mark Hoogendoorn, Rob van der Mei
Summarizing, the DD baseline is: (1) general, as it is applicable to all binary classification problems; (2) simple, as it is quickly determined without training or parameter-tuning; (3) informative, as insightful conclusions can be drawn from the results.
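The snippet does not spell out the DD construction itself, but the "simple, no training or parameter-tuning" property can be illustrated with a toy baseline in the same spirit: the best constant classifier, fixed from the label distribution alone (this is an illustration of a training-free baseline, not the DD baseline from the paper):

```python
import numpy as np

def constant_baseline_accuracy(y):
    """Accuracy of the best constant classifier (all-0 or all-1),
    determined directly from the label distribution -- no training,
    no parameter tuning. A trained model should beat this number."""
    p = np.mean(y)          # fraction of positive labels
    return max(p, 1 - p)

y = np.array([0, 0, 0, 1, 1, 0, 0, 1])     # 3 positives out of 8
print(constant_baseline_accuracy(y))        # -> 0.625
```

Reporting a model's gain over such a baseline is more informative than raw accuracy, especially on imbalanced data where "always predict the majority class" already scores high.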
1 code implementation • 10 Feb 2022 • Luis Pedro Silvestrin, Harry van Zanten, Mark Hoogendoorn, Ger Koole
On the other hand, combining these new inputs with historical data remains a challenge that has not yet been studied in enough detail.
1 code implementation • ICLR 2022 • David W. Romero, Robert-Jan Bruintjes, Jakub M. Tomczak, Erik J. Bekkers, Mark Hoogendoorn, Jan C. van Gemert
In this work, we propose FlexConv, a novel convolutional operation with which high-bandwidth convolutional kernels of learnable kernel size can be learned at a fixed parameter cost.
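One way to realize a learnable kernel size at fixed parameter cost is to modulate a fixed-size kernel with a Gaussian envelope whose width is a single scalar parameter, in the spirit of FlexConv's masking; a minimal numpy sketch (the function and its parameterization are a simplified illustration, not the paper's exact construction):

```python
import numpy as np

def masked_kernel(weights, sigma):
    """Modulate a fixed-size kernel with a Gaussian envelope.
    One scalar `sigma` controls the *effective* kernel size, so size
    can be tuned (or learned) at a constant parameter cost."""
    n = len(weights)
    pos = np.arange(n) - (n - 1) / 2           # positions centered on the kernel
    mask = np.exp(-pos**2 / (2 * sigma**2))    # Gaussian envelope
    return weights * mask

w = np.ones(9)
small = masked_kernel(w, sigma=0.5)    # mass concentrated near the center
large = masked_kernel(w, sigma=10.0)   # close to the full 9-tap kernel
assert small[0] < 1e-10 and large[0] > 0.9
```

Shrinking or growing `sigma` changes the receptive field continuously, without adding or removing any weights.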
no code implementations • 29 Sep 2021 • Leonardos Pantiskas, Kees Verstoep, Mark Hoogendoorn, Henri Bal
Specifically, utilizing a wavelet scattering transformation of the time series and distributed feature selection, we create a solution that employs just 2.5% of the ROCKET features, while achieving accuracy comparable to recent deep learning solutions.
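For context, ROCKET-style features come from convolving a series with many random kernels and pooling each response with PPV (proportion of positive values); the paper then keeps only a small selected subset. A hedged numpy sketch of the feature-generation step only (kernel count and length are illustrative; the wavelet scattering and distributed selection steps are not reproduced):

```python
import numpy as np

def rocket_features(x, n_kernels=100, kernel_len=9, seed=0):
    """ROCKET-style features: convolve the series with random kernels
    and pool each response with PPV (proportion of positive values)."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_kernels):
        w = rng.normal(size=kernel_len)        # random kernel weights
        b = rng.uniform(-1, 1)                 # random bias
        resp = np.convolve(x, w, mode='valid') + b
        feats.append(np.mean(resp > 0))        # PPV pooling in [0, 1]
    return np.array(feats)

x = np.sin(np.linspace(0, 8 * np.pi, 256))
f = rocket_features(x)
assert f.shape == (100,) and np.all((0 <= f) & (f <= 1))
```

Since each feature is cheap to compute independently, discarding 97.5% of them translates almost directly into inference savings on a constrained device.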
no code implementations • 13 Aug 2021 • Jan Klein, Sandjai Bhulai, Mark Hoogendoorn, Rob van der Mei
These approaches use a query function to choose specific unlabeled instances that are expected to improve overall classification performance.
1 code implementation • 22 Jul 2021 • Luis P. Silvestrin, Leonardos Pantiskas, Mark Hoogendoorn
Time-series forecasting plays an important role in many domains.
no code implementations • 29 Mar 2021 • Ali el Hassouni, Mark Hoogendoorn, Marketa Ciharova, Annet Kleiboer, Khadicha Amarti, Vesa Muhonen, Heleen Riper, A. E. Eiben
We implemented our open-source RL architecture and integrated it with the MoodBuster mobile application for mental health to provide messages to increase daily adherence to the online therapeutic modules.
1 code implementation • ICLR 2022 • David W. Romero, Anna Kuzina, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
Convolutional networks are unable to handle sequences of unknown size and their memory horizon must be defined a priori.
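The memory-horizon limitation can be lifted by parameterizing the kernel as a continuous function of relative position, as CKConv does with a small neural network, and sampling it at as many points as the sequence requires. A minimal numpy sketch, with a fixed analytic function standing in for the learned kernel network (both function and shape are illustrative assumptions):

```python
import numpy as np

def continuous_kernel(t):
    """Stand-in for a learned continuous kernel: any function of the
    normalized relative position t in [0, 1] works here; CKConv learns
    this function with a small neural network."""
    return np.exp(-3 * t) * np.cos(6 * np.pi * t)

def ck_conv(x):
    """Causal convolution whose kernel is *sampled* from a continuous
    function at exactly len(x) positions -- the same parameterization
    covers sequences of any length, with no fixed memory horizon."""
    n = len(x)
    k = continuous_kernel(np.linspace(0, 1, n))   # sample kernel at n points
    return np.convolve(x, k)[:n]                  # keep the causal part

for n in (16, 64, 256):                           # one kernel, many lengths
    assert ck_conv(np.ones(n)).shape == (n,)
```

The key point is that the kernel's parameter count is fixed by the function, not by the sequence length, so the memory horizon no longer has to be chosen a priori.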
Ranked #5 on Sequential Image Classification on Sequential MNIST
no code implementations • 3 Nov 2020 • Daniel Lutscher, Ali el Hassouni, Maarten Stol, Mark Hoogendoorn
Finding well-defined clusters in data represents a fundamental challenge for many data-driven applications, and largely depends on good data representation.
1 code implementation • 9 Jun 2020 • David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
In this work, we fill this gap by leveraging the symmetries inherent to time-series for the construction of equivariant neural networks.
1 code implementation • ICML 2020 • David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses).
no code implementations • ICLR 2020 • David W. Romero, Mark Hoogendoorn
Equivariance is a desirable property, as it yields much more parameter-efficient neural architectures and preserves the structure of the input through the feature mapping.
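Translation equivariance of convolution, the canonical example, can be verified directly: shifting the input shifts the output by the same amount. A minimal numpy check, using circular shifts and circular convolution so the equality is exact with no boundary effects:

```python
import numpy as np

def circ_conv(x, k):
    """Circular 1-D convolution via FFT (periodic boundaries keep the
    equivariance check exact, with no edge effects)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, len(x))))

rng = np.random.default_rng(0)
x = rng.normal(size=32)
k = rng.normal(size=5)

# Translation equivariance: conv(shift(x)) == shift(conv(x))
lhs = circ_conv(np.roll(x, 3), k)
rhs = np.roll(circ_conv(x, k), 3)
assert np.allclose(lhs, rhs)
```

This is what "preserving the structure of the input" means concretely: the feature map transforms in lockstep with the input, so the network never has to relearn the same pattern at every position.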
no code implementations • 1 Aug 2019 • Floris den Hengst, Mark Hoogendoorn, Frank van Harmelen, Joost Bosman
Reinforcement Learning methods that optimize dialogue policies have seen successes in past years and have recently been extended into methods that personalize the dialogue, e.g., taking the personal context of users into account.
no code implementations • 11 Apr 2019 • Mark Hoogendoorn, Ward van Breda, Jeroen Ruwaard
The huge wealth of data in the health domain can be exploited to create models that predict development of health states over time.
no code implementations • 3 Apr 2019 • Seyed Amin Tabatabaei, Xixi Lu, Mark Hoogendoorn, Hajo A. Reijers
In this paper we propose an approach that is able to find groups of patients based on a small sample of positive examples given by a domain expert.
no code implementations • 10 Apr 2018 • Ali el Hassouni, Mark Hoogendoorn, Martijn van Otterlo, A. E. Eiben, Vesa Muhonen, Eduardo Barbaro
The time to learn intervention policies is limited as disengagement from the user can occur quickly.