Search Results for author: Jesse Read

Found 30 papers, 6 papers with code

Online Learning of Decision Trees with Thompson Sampling

no code implementations9 Apr 2024 Ayman Chaouki, Jesse Read, Albert Bifet

Recent breakthroughs addressed this suboptimality issue in the batch setting, but no such work has considered the online setting with data arriving in a stream.

Interpretable Machine Learning, Thompson Sampling
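For readers unfamiliar with Thompson Sampling itself, below is a minimal, textbook Beta-Bernoulli sketch of the technique in Python; it is purely illustrative and is not the paper's online tree-learning algorithm.

import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.3, 0.5, 0.7]        # hidden Bernoulli reward rates (toy data)
alpha = np.ones(len(true_rates))    # Beta posterior parameters (successes + 1)
beta = np.ones(len(true_rates))     # Beta posterior parameters (failures + 1)

for t in range(1000):
    sampled = rng.beta(alpha, beta)          # sample a plausible rate per arm
    arm = int(np.argmax(sampled))            # act greedily w.r.t. the sampled rates
    reward = rng.random() < true_rates[arm]  # observe a Bernoulli reward
    alpha[arm] += reward                     # posterior update
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))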

A Historical Context for Data Streams

no code implementations18 Oct 2023 Indrė Žliobaitė, Jesse Read

Machine learning from data streams is an active and growing research area.

An Improved Yaw Control Algorithm for Wind Turbines via Reinforcement Learning

no code implementations2 May 2023 Alban Puech, Jesse Read

Yaw misalignment, measured as the difference between the wind direction and the nacelle position of a wind turbine, has consequences for the power output, the safety and the lifetime of the turbine and its wind park as a whole.

Reinforcement Learning
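As a small illustration of the quantity the controller acts on, here is a sketch of computing signed yaw misalignment from the wind direction and nacelle position, wrapped to [-180, 180) degrees; the reinforcement-learning reward used in the paper is not reproduced here.

def yaw_misalignment(wind_direction_deg: float, nacelle_position_deg: float) -> float:
    # Signed yaw misalignment in degrees, wrapped to the interval [-180, 180)
    diff = wind_direction_deg - nacelle_position_deg
    return (diff + 180.0) % 360.0 - 180.0

# Example: wind at 350 degrees, nacelle at 10 degrees -> misalignment of -20 degrees
print(yaw_misalignment(350.0, 10.0))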

Shapley Chains: Extending Shapley Values to Classifier Chains

1 code implementation30 Mar 2023 Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Jesse Read

Compared to existing methods, this approach allows attributing a more complete feature contribution to the predictions of multi-output classification tasks.

Attribute, Decision Making +1

Transferable Deep Metric Learning for Clustering

no code implementations13 Feb 2023 Simo Alami. C, Rim Kaddah, Jesse Read

Clustering in high-dimensional spaces is a difficult task; the usual distance metrics may no longer be appropriate under the curse of dimensionality.

Clustering, Metric Learning

Chains of Autoreplicative Random Forests for missing value imputation in high-dimensional datasets

no code implementations2 Jan 2023 Ekaterina Antonenko, Jesse Read

In this paper, we consider missing value imputation as a multi-label classification problem and propose Chains of Autoreplicative Random Forests.

Denoising, Imputation +1
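A minimal sketch of the underlying idea, i.e., treating the imputation of each incomplete column as a classification problem chained over the other (already imputed) columns; this uses plain scikit-learn random forests and is not the paper's Autoreplicative Random Forest.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_true = rng.integers(0, 2, size=(200, 10)).astype(float)   # toy binary data
mask = rng.random(X_true.shape) < 0.1                        # 10% of entries missing
X_miss = np.where(mask, np.nan, X_true)

# Crude initial fill, then one chained pass over the columns
X_imp = X_miss.copy()
for j in range(X_imp.shape[1]):
    X_imp[np.isnan(X_imp[:, j]), j] = np.round(np.nanmean(X_imp[:, j]))

for j in range(X_imp.shape[1]):
    miss = mask[:, j]
    if not miss.any():
        continue
    others = np.delete(X_imp, j, axis=1)                     # all other (imputed) columns
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(others[~miss], X_imp[~miss, j].astype(int))      # train on observed rows
    X_imp[miss, j] = clf.predict(others[miss])               # impute the missing rows

print("imputation accuracy:", (X_imp[mask] == X_true[mask]).mean())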

Learning from Data Streams: An Overview and Update

no code implementations30 Dec 2022 Jesse Read, Indrė Žliobaitė

We propose to tackle these issues by reformulating the fundamental definitions and settings of supervised data-stream learning with regard to contemporary considerations of concept drift and temporal dependence; we take a fresh look at what constitutes a supervised data-stream learning task, and reconsider which algorithms may be applied to tackle such tasks.

Linear TreeShap

1 code implementation16 Sep 2022 Peng Yu, Chao Xu, Albert Bifet, Jesse Read

Decision trees are well known for their ease of interpretability.

Estimating Multi-label Accuracy using Labelset Distributions

no code implementations9 Sep 2022 Laurence A. F. Park, Jesse Read

In this article we estimate the expected accuracy as a surrogate for confidence, for a given accuracy metric.

Decision Making
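A minimal sketch of the general idea, assuming the model supplies (here: independent) per-label probabilities: sample labelsets from that distribution and average the accuracy metric of the prediction against the samples. The metric and the independence assumption are illustrative, not the paper's exact estimator.

import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.9, 0.6, 0.2, 0.7])      # model's P(label j = 1) for one instance
y_hat = (p >= 0.5).astype(int)          # the prediction we want a confidence for

samples = (rng.random((10000, p.size)) < p).astype(int)   # sampled labelsets
hamming_acc = (samples == y_hat).mean(axis=1)             # Hamming accuracy per sample
exact_match = (samples == y_hat).all(axis=1)              # exact match per sample

print("expected Hamming accuracy:", hamming_acc.mean())
print("expected exact-match accuracy:", exact_match.mean())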

From Multi-label Learning to Cross-Domain Transfer: A Model-Agnostic Approach

no code implementations24 Jul 2022 Jesse Read

In multi-label learning, a particular case of multi-task learning where a single data point is associated with multiple target labels, it was widely assumed in the literature that, to obtain best accuracy, the dependence among the labels should be explicitly modeled.

Multi-Label Learning, Multi-Task Learning

On Merging Feature Engineering and Deep Learning for Diagnosis, Risk-Prediction and Age Estimation Based on the 12-Lead ECG

no code implementations13 Jul 2022 Eran Zvuloni, Jesse Read, Antônio H. Ribeiro, Antonio Luiz P. Ribeiro, Joachim A. Behar

Conclusion: We found that for traditional 12-lead ECG-based diagnosis tasks, DL did not yield a meaningful improvement over FE, while it significantly improved performance on the nontraditional regression task.

Age Estimation, BIG-bench Machine Learning +5

CAMEO: Curiosity Augmented Metropolis for Exploratory Optimal Policies

no code implementations19 May 2022 Simo Alami. C, Fernando Llorente, Rim Kaddah, Luca Martino, Jesse Read

We further show that the different policies we sample present different risk profiles, corresponding to interesting practical applications in interpretability; this represents a first step towards learning the distribution of optimal policies itself.

Optimality in Noisy Importance Sampling

no code implementations7 Jan 2022 Fernando Llorente, Luca Martino, Jesse Read, David Delgado-Gómez

In this work, we analyze noisy importance sampling (IS), i.e., IS working with noisy evaluations of the target density.
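A minimal numpy sketch of the setting: a self-normalized IS estimator in which every evaluation of the unnormalized target log-density is corrupted by noise. The target, proposal and noise model below are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * x**2          # unnormalized standard-normal log-density (toy target)

N = 10000
x = rng.normal(0.0, 2.0, size=N)                 # proposal samples from N(0, 2^2)
log_q = -0.5 * (x / 2.0) ** 2 - np.log(2.0)      # proposal log-density (up to a constant)

noisy_log_p = log_target(x) + rng.normal(0.0, 0.5, size=N)   # noisy target evaluations

log_w = noisy_log_p - log_q
w = np.exp(log_w - log_w.max())                  # stabilize before normalizing
w /= w.sum()

print("self-normalized IS estimate of E[x]:", np.sum(w * x))   # true value is 0
print("effective sample size:", 1.0 / np.sum(w**2))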

A Survey on Semi-Supervised Learning for Delayed Partially Labelled Data Streams

no code implementations16 Jun 2021 Heitor Murilo Gomes, Maciej Grzenda, Rodrigo Mello, Jesse Read, Minh Huong Le Nguyen, Albert Bifet

Unlabelled data appear in many domains and are particularly relevant to streaming applications, where even though data are abundant, labelled data are rare.

Active Learning, Benchmarking

A Joint introduction to Gaussian Processes and Relevance Vector Machines with Connections to Kalman filtering and other Kernel Smoothers

no code implementations19 Sep 2020 Luca Martino, Jesse Read

Our focus is on developing a common framework with which to view these methods, via intermediate methods such as a probabilistic version of the well-known kernel ridge regression, drawing connections among them via dual formulations, and discussing their application in the context of major tasks: regression, smoothing, interpolation, and filtering.

Gaussian Processes, Regression
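As a concrete anchor for the common framework mentioned above, here is a small numpy sketch of plain kernel ridge regression, the method whose probabilistic version serves as the intermediate step; the kernel and data are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

X = rng.uniform(-3, 3, size=(30, 1))             # toy 1-D regression data
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)

lam = 0.1                                        # ridge regularizer (noise-variance analogue)
alpha = np.linalg.solve(rbf_kernel(X, X) + lam * np.eye(len(X)), y)   # dual coefficients

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
y_pred = rbf_kernel(X_test, X) @ alpha           # same form as the GP posterior mean
print(y_pred)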

Better Sign Language Translation with STMC-Transformer

1 code implementation COLING 2020 Kayo Yin, Jesse Read

This contradicts previous claims that GT gloss translation acts as an upper bound for SLT performance and reveals that glosses are an inefficient representation of sign language.

 Ranked #1 on Sign Language Translation on ASLG-PC12 (using extra training data)

Sign Language Recognition, Sign Language Translation +1

Classifier Chains: A Review and Perspectives

no code implementations26 Dec 2019 Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank

This performance led to further studies of how exactly it works and how it could be improved. Over the past decade, numerous studies have explored classifier chain mechanisms on a theoretical level, and many improvements have been made to the training and inference procedures, such that this method remains among the state-of-the-art options for multi-label learning.

Multi-Label Classification, Multi-Label Learning
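A minimal sketch of the chain mechanism itself: label j's classifier is trained on the original features plus labels 1..j-1, and at test time the chain's own predictions are cascaded forward. The base learner and data below are illustrative; scikit-learn's sklearn.multioutput.ClassifierChain implements the same idea.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
Y = (X @ rng.normal(size=(5, 3)) + rng.normal(scale=0.5, size=(300, 3)) > 0).astype(int)

# Training: label j's classifier sees [X, Y[:, :j]]
chain = [LogisticRegression().fit(np.hstack([X, Y[:, :j]]), Y[:, j])
         for j in range(Y.shape[1])]

# Prediction: cascade each predicted label into the next classifier's inputs
def predict_chain(chain, X):
    preds = np.empty((X.shape[0], len(chain)), dtype=int)
    for j, clf in enumerate(chain):
        preds[:, j] = clf.predict(np.hstack([X, preds[:, :j]]))
    return preds

print(predict_chain(chain, X[:5]))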

Probabilistic Regressor Chains with Monte Carlo Methods

no code implementations18 Jul 2019 Jesse Read, Luca Martino

A large number and diversity of techniques have been offered in the literature in recent years for solving multi-label classification tasks, including classifier chains where predictions are cascaded to other models as additional features.

Multi-Label Classification

Concept-drifting Data Streams are Time Series; The Case for Continuous Adaptation

no code implementations4 Oct 2018 Jesse Read

A major focus in the data stream literature is on designing methods that can deal with concept drift, a challenge where the generating distribution changes over time.

Time Series, Time Series Analysis

Perturb and Combine to Identify Influential Spreaders in Real-World Networks

no code implementations13 Jul 2018 Antoine J. -P. Tixier, Maria-Evgenia G. Rossi, Fragkiskos D. Malliaros, Jesse Read, Michalis Vazirgiannis

Some of the most effective influential spreader detection algorithms are unstable to small perturbations of the network structure.
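A rough sketch of the perturb-and-combine strategy named in the title, assuming networkx and a k-core-style spreader score: perturb the network several times, rescore the nodes, and average. This illustrates the general idea only, not the paper's specific algorithm.

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.karate_club_graph()

def perturbed_core_scores(G, n_runs=50, drop_frac=0.1):
    scores = {v: 0.0 for v in G.nodes()}
    edges = list(G.edges())
    for _ in range(n_runs):
        H = G.copy()
        drop_idx = rng.choice(len(edges), size=int(drop_frac * len(edges)), replace=False)
        H.remove_edges_from([edges[i] for i in drop_idx])
        for v, c in nx.core_number(H).items():   # k-core index as a spreader proxy
            scores[v] += c / n_runs
    return scores

scores = perturbed_core_scores(G)
print("top spreaders (averaged core number):", sorted(scores, key=scores.get, reverse=True)[:5])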

Scikit-Multiflow: A Multi-output Streaming Framework

1 code implementation12 Jul 2018 Jacob Montiel, Jesse Read, Albert Bifet, Talel Abdessalem

Scikit-multiflow is a multi-output/multi-label and stream data mining framework for the Python programming language.
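A small usage sketch, assuming the scikit-multiflow API roughly as of version 0.5 (a synthetic stream generator plus a prequential test-then-train loop); names may differ in other versions, so check the project's documentation.

from skmultiflow.data import SEAGenerator
from skmultiflow.trees import HoeffdingTreeClassifier

stream = SEAGenerator(random_state=1)
model = HoeffdingTreeClassifier()

n_correct, n_seen = 0, 0
for _ in range(5000):
    X, y = stream.next_sample()                          # one instance at a time
    if n_seen > 0:
        n_correct += int(model.predict(X)[0] == y[0])    # test before training on it
    model.partial_fit(X, y, classes=[0, 1])
    n_seen += 1

print("prequential accuracy:", n_correct / (n_seen - 1))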

Multi-label Methods for Prediction with Sequential Data

1 code implementation27 Sep 2016 Jesse Read, Luca Martino, Jaakko Hollmén

In this paper we detect and elaborate on connections between multi-label methods and Markovian models, and study the suitability of multi-label methods for prediction in sequential data.

General Classification

Multi-label Classification using Labels as Hidden Nodes

no code implementations31 Mar 2015 Jesse Read, Jaakko Hollmén

We extend some recent discussion in the literature and provide a deeper analysis: namely, we develop the view that label dependence is often introduced by an inadequate base classifier, rather than being inherent to the data or underlying concept, and we show how even an exhaustive analysis of label dependence may not lead to an optimal classification structure.

Classification, General Classification +1

Deep Learning for Multi-label Classification

no code implementations17 Dec 2014 Jesse Read, Fernando Perez-Cruz

In multi-label classification, the main focus has been to develop ways of learning the underlying dependencies between labels, and to take advantage of this at classification time.

Classification, General Classification +1

Kaggle LSHTC4 Winning Solution

no code implementations3 May 2014 Antti Puurula, Jesse Read, Albert Bifet

The number of documents per label is chosen using label priors and thresholding of vote scores.

Text Classification

Efficient Monte Carlo Methods for Multi-Dimensional Learning with Classifier Chains

no code implementations9 Nov 2012 Jesse Read, Luca Martino, David Luengo

Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class, as in traditional classification problems.

Classification, General Classification +1
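A generic sketch of Monte Carlo inference through a probabilistic classifier chain, in the spirit of the setting above: instead of predicting each label greedily, sample label vectors along the chain from each link's predicted probabilities and use the samples to approximate the joint mode. This illustrates the idea only and is not the paper's specific estimator.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
Y = (X @ rng.normal(size=(5, 3)) + rng.normal(scale=0.5, size=(300, 3)) > 0).astype(int)

# Probabilistic chain: label j is modelled given the features and labels 1..j-1
chain = [LogisticRegression().fit(np.hstack([X, Y[:, :j]]), Y[:, j])
         for j in range(Y.shape[1])]

def sample_labelsets(chain, x, n_samples=500):
    # Draw label vectors by sampling each link given the previously sampled labels
    out = np.zeros((n_samples, len(chain)), dtype=int)
    for j, clf in enumerate(chain):
        inp = np.hstack([np.repeat(x[None, :], n_samples, axis=0), out[:, :j]])
        out[:, j] = rng.random(n_samples) < clf.predict_proba(inp)[:, 1]
    return out

samples = sample_labelsets(chain, X[0])
labelsets, counts = np.unique(samples, axis=0, return_counts=True)
print("most probable labelset (MC estimate):", labelsets[counts.argmax()], "true:", Y[0])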
