no code implementations • NeurIPS 2023 • Alexander Modell, Ian Gallagher, Emma Ceccherini, Nick Whiteley, Patrick Rubin-Delanchy
We present a new representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
1 code implementation • NeurIPS 2023 • Annie Gray, Alexander Modell, Patrick Rubin-Delanchy, Nick Whiteley
In this paper we offer a new perspective on the well-established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
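The generic merge loop behind agglomerative clustering can be sketched in a few lines. This is a minimal single-linkage illustration on made-up 1-D points, not the paper's specific analysis or setup:

```python
# Minimal sketch of agglomerative (single-linkage) clustering on 1-D points.
# The data and the single-linkage choice are illustrative assumptions.

def agglomerate(points):
    """Repeatedly merge the two closest clusters; record the merge tree."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest single-linkage distance
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((sorted(clusters[i] + clusters[j]), d))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

merges = agglomerate([0.0, 0.1, 1.0, 1.1, 5.0])
# nearby points merge first; the sequence of merge heights encodes a hierarchy
```

Reading off the merge distances in order gives the dendrogram heights, which is the hierarchical structure such algorithms aim to recover.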
no code implementations • 27 Oct 2022 • Hannah Sansford, Alexander Modell, Nick Whiteley, Patrick Rubin-Delanchy
Recent work has shown that sparse graphs containing many triangles cannot be reproduced using a finite-dimensional representation of the nodes, in which link probabilities are inner products.
Graph Representation Learning • Vocal Bursts Intensity Prediction
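The finite-dimensional model the result above concerns can be illustrated concretely: each node gets a vector, and a link probability is the inner product of the two endpoints' vectors. The vectors below are made up for the example:

```python
# Hypothetical inner-product link model: node i has vector x_i, and the
# probability of a link between i and j is <x_i, x_j>. Vectors are illustrative.

x = {
    "a": [0.9, 0.1],
    "b": [0.8, 0.3],
    "c": [0.1, 0.2],
}

def link_prob(i, j):
    p = sum(u * v for u, v in zip(x[i], x[j]))
    # inner products must land in [0, 1] to be valid probabilities
    assert 0.0 <= p <= 1.0
    return p

# similar vectors give a high link probability; dissimilar ones give a low one
```

The paper's point is a limitation of exactly this model class: no finite-dimensional choice of such vectors can reproduce sparse graphs with many triangles.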
2 code implementations • 24 Aug 2022 • Nick Whiteley, Annie Gray, Patrick Rubin-Delanchy
The Manifold Hypothesis is a widely accepted tenet of Machine Learning which asserts that nominally high-dimensional data are in fact concentrated near a low-dimensional manifold embedded in high-dimensional space.
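A toy numerical illustration of the hypothesis: points on a one-dimensional curve (a circle), linearly embedded in a nominally 10-dimensional ambient space. The construction and dimensions are illustrative, not the paper's model:

```python
# Toy Manifold Hypothesis demo: 200 points on a circle (intrinsically 1-D)
# embedded in R^10. All choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)  # low-dimensional structure
A = rng.normal(size=(2, 10))                       # linear embedding into R^10
X = circle @ A

# the nominally 10-dimensional data occupy only a 2-dimensional subspace
assert np.linalg.matrix_rank(X) == 2
```

Despite ten ambient coordinates, every point lies in a two-dimensional plane containing the curve, which is the kind of concentration the hypothesis describes.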
1 code implementation • 26 May 2022 • Michael Whitehouse, Nick Whiteley, Lorenzo Rimella
In contrast to the popular ODE approach to compartmental modelling, in which a large-population limit is used to motivate a deterministic model, Poisson approximate likelihoods (PALs) are derived from approximate filtering equations for finite-population, stochastic compartmental models; the large-population limit instead drives consistency of maximum PAL estimators.
1 code implementation • NeurIPS 2021 • Nick Whiteley, Annie Gray, Patrick Rubin-Delanchy
Given a graph or similarity matrix, we consider the problem of recovering a notion of true distance between the nodes, and so their true positions.
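One standard route from a similarity (Gram) matrix back to node positions is classical multidimensional scaling via an eigendecomposition. This is a generic sketch of that idea on made-up positions, not the paper's specific estimator:

```python
# Sketch: recover positions (up to rotation) from a Gram matrix via the
# top eigenpairs. Ground-truth positions Z are illustrative.
import numpy as np

Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
G = Z @ Z.T  # similarity matrix of inner products

# top-2 eigenpairs of G give positions up to an orthogonal transform
vals, vecs = np.linalg.eigh(G)
top = np.argsort(vals)[::-1][:2]
Z_hat = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# pairwise distances are recovered exactly, even though individual
# coordinates are only identified up to rotation/reflection
def dists(X):
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

assert np.allclose(dists(Z_hat), dists(Z))
```

The interesting part of the paper is precisely what "true distance" should mean when the input is a noisy graph rather than a clean Gram matrix; the sketch only shows the noiseless linear-algebra backbone.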
1 code implementation • 24 Jun 2020 • Nick Whiteley, Lorenzo Rimella
We introduce a new method for inference in stochastic epidemic models which uses recursive multinomial approximations to integrate over unobserved variables and thus circumvent likelihood intractability.
no code implementations • 15 Apr 2020 • Lorenzo Rimella, Nick Whiteley
We define a Bayesian neural network that evolves in time, called a Hidden Markov neural network.
1 code implementation • 25 Jun 2019 • Nick Whiteley
This note outlines a method for clustering time series based on a statistical model in which volatility shifts at unobserved change-points.
Methodology • Statistical Finance
1 code implementation • 5 Feb 2019 • Lorenzo Rimella, Nick Whiteley
We propose algorithms for approximate filtering and smoothing in high-dimensional factorial hidden Markov models.
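The motivation for approximation is easy to see in code: with M binary chains, the joint state space of a factorial HMM has 2^M states, so the exact filter propagates a 2^M-vector, while a factorised approximation only tracks one small marginal per chain. The model ingredients below are illustrative assumptions, not the paper's algorithm:

```python
# Why exact filtering in a factorial HMM is expensive: the joint prediction
# step is O(4**M) for M binary chains, versus O(M) for per-chain marginals.
# Shared transition matrix and uniform initialisation are illustrative.
import numpy as np

M = 10                      # number of binary chains (illustrative)
P = np.array([[0.9, 0.1],   # per-chain transition matrix (assumed shared)
              [0.2, 0.8]])

# exact filter: the joint transition matrix is a Kronecker product
P_joint = np.ones((1, 1))
for _ in range(M):
    P_joint = np.kron(P_joint, P)
assert P_joint.shape == (2**M, 2**M)   # already 1024 x 1024 for M = 10

pi_joint = np.full(2**M, 1.0 / 2**M)   # uniform joint filtering distribution
pi_joint = pi_joint @ P_joint          # one exact prediction step

# factorised alternative: one length-2 marginal per chain
pi_marg = [np.array([0.5, 0.5]) @ P for _ in range(M)]
```

The Kronecker structure means the exact joint object grows exponentially in the number of chains, which is what approximate filtering and smoothing schemes are designed to sidestep.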
no code implementations • 8 Oct 2018 • Nick Whiteley, Matt W. Jones, Aleks P. F. Domanski
Quantitative bounds on the distance to the infinite Viterbi alignment, which are the first of their kind, are derived and used to illustrate how approximate estimation via parallelization can be accurate and scalable to high-dimensional problems, because the rate of convergence to the infinite Viterbi alignment does not necessarily depend on $d$.
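For reference, the standard Viterbi recursion that such parallelisation analyses build on is the dynamic program for the most likely hidden state sequence. The toy two-state model below is illustrative, not from the paper:

```python
# Standard Viterbi decoding for a toy two-state HMM (all parameters assumed).
import math

states = [0, 1]
log_init = [math.log(0.5), math.log(0.5)]
log_trans = [[math.log(0.9), math.log(0.1)],
             [math.log(0.1), math.log(0.9)]]
log_emit = [[math.log(0.8), math.log(0.2)],   # state 0 mostly emits symbol 0
            [math.log(0.2), math.log(0.8)]]   # state 1 mostly emits symbol 1

def viterbi(obs):
    # delta[s] = best log-probability of any state path ending in state s
    delta = [log_init[s] + log_emit[s][obs[0]] for s in states]
    back = []
    for o in obs[1:]:
        prev = delta
        step, new = [], []
        for s in states:
            best = max(states, key=lambda r: prev[r] + log_trans[r][s])
            step.append(best)
            new.append(prev[best] + log_trans[best][s] + log_emit[s][o])
        delta = new
        back.append(step)
    # backtrack the optimal alignment from the best final state
    path = [max(states, key=lambda s: delta[s])]
    for step in reversed(back):
        path.append(step[path[-1]])
    return path[::-1]
```

The "infinite Viterbi alignment" results concern how quickly this alignment forgets distant observations, which is what makes block-parallel approximations accurate.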