Search Results for author: Daniel F. Schmidt

Found 10 papers, 7 papers with code

Computing Marginal and Conditional Divergences between Decomposable Models with Applications

no code implementations • 13 Oct 2023 • Loong Kuan Lee, Geoffrey I. Webb, Daniel F. Schmidt, Nico Piatkowski

Doing so tractably is non-trivial, as we need to decompose the divergence between these distributions and therefore require a decomposition over the marginal and conditional distributions of these models.
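For intuition, the kind of decomposition referred to here is analogous to the chain rule for the Kullback–Leibler divergence, which splits a joint divergence into a marginal term plus an expected conditional term. This two-variable KL case is purely illustrative; the paper treats general decomposable models:

$$
D_{\mathrm{KL}}(P_{XY} \,\|\, Q_{XY}) = D_{\mathrm{KL}}(P_X \,\|\, Q_X) + \mathbb{E}_{x \sim P_X}\!\left[ D_{\mathrm{KL}}(P_{Y \mid X = x} \,\|\, Q_{Y \mid X = x}) \right]
$$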

QUANT: A Minimalist Interval Method for Time Series Classification

1 code implementation • 2 Aug 2023 • Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb

We show that it is possible to achieve the same accuracy, on average, as the most accurate existing interval methods for time series classification on a standard set of benchmark datasets using a single type of feature (quantiles), fixed intervals, and an 'off the shelf' classifier.

Classification • Time Series • +1
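As an illustration of the idea in the abstract above (quantiles over fixed dyadic intervals, fed to an 'off the shelf' classifier), here is a minimal sketch. It is not the authors' implementation; the function name, interval scheme, and classifier choice are assumptions for illustration only:

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def quantile_interval_features(X, depth=3, n_quantiles=4):
    # X: (n_samples, series_length); assumes series_length >= 2**depth.
    # For each dyadic interval at each depth, use n_quantiles quantiles as features.
    n, m = X.shape
    feats = []
    for d in range(depth + 1):
        width = m // (2 ** d)
        for i in range(2 ** d):
            seg = X[:, i * width:(i + 1) * width]
            feats.append(np.quantile(seg, np.linspace(0, 1, n_quantiles), axis=1).T)
    return np.hstack(feats)

# Usage sketch: the features go straight into an off-the-shelf classifier.
# clf = ExtraTreesClassifier(n_estimators=200).fit(quantile_interval_features(X_train), y_train)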

Sparse Horseshoe Estimation via Expectation-Maximisation

1 code implementation • 7 Nov 2022 • Shu Yu Tew, Daniel F. Schmidt, Enes Makalic

A particular strength of our approach is that the M-step depends only on the form of the prior and is independent of the form of the likelihood.
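For reference, the standard horseshoe hierarchy that such estimators target places half-Cauchy priors on the local and global scales (the paper's specific EM updates are not reproduced here):

$$
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2), \qquad \lambda_j \sim C^{+}(0, 1), \qquad \tau \sim C^{+}(0, 1)
$$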

HYDRA: Competing convolutional kernels for fast and accurate time series classification

1 code implementation • 25 Mar 2022 • Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb

We present HYDRA, a simple, fast, and accurate dictionary method for time series classification using competing convolutional kernels, combining key aspects of both ROCKET and conventional dictionary methods.

Time Series • Time Series Analysis • +1
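A rough sketch of the 'competing kernels' idea described in the abstract above: within each group of random convolutional kernels, the kernel with the largest response at each timestep wins, and per-kernel win counts form dictionary-style features. This is illustrative only; the actual HYDRA transform adds further machinery (e.g. dilation and multiple aggregations), and all names and parameters below are assumptions:

import numpy as np

def competing_kernel_counts(X, n_groups=8, kernels_per_group=8, k_len=9, seed=0):
    # X: (n_samples, series_length). Returns (n_samples, n_groups * kernels_per_group).
    rng = np.random.default_rng(seed)
    n, m = X.shape
    feats = np.zeros((n, n_groups * kernels_per_group))
    for g in range(n_groups):
        K = rng.standard_normal((kernels_per_group, k_len))
        K -= K.mean(axis=1, keepdims=True)  # zero-mean random kernels
        for i in range(n):
            # responses of every kernel in the group at every timestep
            resp = np.stack([np.convolve(X[i], k, mode='valid') for k in K])
            winners = resp.argmax(axis=0)  # which kernel 'wins' each timestep
            counts = np.bincount(winners, minlength=kernels_per_group)
            feats[i, g * kernels_per_group:(g + 1) * kernels_per_group] = counts
    return feats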

MINIROCKET: A Very Fast (Almost) Deterministic Transform for Time Series Classification

2 code implementations • 16 Dec 2020 • Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb

ROCKET achieves state-of-the-art accuracy with a fraction of the computational expense of most existing methods by transforming input time series using random convolutional kernels, and using the transformed features to train a linear classifier.

General Classification • Time Series • +2
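A simplified sketch of the pipeline the abstract describes: random convolutional kernels, pooled statistics per kernel (proportion of positive values and maximum), and a linear classifier on the resulting features. The published ROCKET/MINIROCKET transforms additionally use dilation, padding, and many more kernels; everything below is an illustrative assumption, not the authors' implementation:

import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def random_kernel_features(X, n_kernels=100, k_len=9, seed=0):
    # X: (n_samples, series_length). Two features per kernel: PPV and max.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    feats = np.zeros((n, 2 * n_kernels))
    for j in range(n_kernels):
        k = rng.standard_normal(k_len)
        b = rng.uniform(-1, 1)  # random bias shifts the threshold for PPV
        for i in range(n):
            c = np.convolve(X[i], k, mode='valid') + b
            feats[i, 2 * j] = (c > 0).mean()  # proportion of positive values
            feats[i, 2 * j + 1] = c.max()     # max pooling
    return feats

# Usage sketch: a ridge classifier on the transformed features.
# clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
# clf.fit(random_kernel_features(X_train), y_train)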

Log-Scale Shrinkage Priors and Adaptive Bayesian Global-Local Shrinkage Estimation

no code implementations • 8 Jan 2018 • Daniel F. Schmidt, Enes Makalic

Simulations show that the adaptive log-$t$ procedure appears to always perform well, irrespective of the level of sparsity or signal-to-noise ratio of the underlying model.
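For context, this sits in the global-local shrinkage family (cf. the horseshoe hierarchy above), in which each coefficient has prior $\beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2)$. A log-scale prior specifies the local scale through a density on its logarithm; a hedged reading of the log-$t$ construction, with the exact parameterisation left to the paper, is:

$$
\log \lambda_j^2 \sim t_{\nu}(0, s^2)
$$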
