3 code implementations • 4 Feb 2022 • Patrick Kidger
Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions).
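For concreteness, here is a minimal, library-free sketch of the first of these objects: a neural ODE evolves a state $z$ under $\mathrm{d}z/\mathrm{d}t = f_\theta(z)$ for a learnt vector field $f_\theta$, integrated below with fixed-step Euler. All names and sizes are illustrative, not any particular library's API.

```python
import numpy as np

# A minimal, library-free sketch of a neural ODE dz/dt = f_theta(z),
# integrated with fixed-step Euler. Everything here is illustrative.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)

def f_theta(z):
    # A small MLP vector field on R^2.
    return W2 @ np.tanh(W1 @ z + b1) + b2

def odeint_euler(z0, t0=0.0, t1=1.0, steps=100):
    z, dt = z0, (t1 - t0) / steps
    for _ in range(steps):
        z = z + dt * f_theta(z)
    return z

z1 = odeint_euler(np.array([1.0, 0.0]))  # hidden state at time t1
```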
1 code implementation • 30 Oct 2021 • Patrick Kidger, Cristian Garcia
One: parameterised functions are themselves represented as 'PyTrees', which means that the parameterisation of a function is transparent to the JAX framework.
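As a short sketch of what this looks like in practice (using Equinox's documented eqx.Module and eqx.filter_grad; details may vary across versions):

```python
import jax
import jax.numpy as jnp
import equinox as eqx

# An Equinox model is itself a PyTree, so JAX transformations see its
# parameters directly.
class MLP(eqx.Module):
    w: jax.Array
    b: jax.Array

    def __call__(self, x):
        return jnp.tanh(self.w @ x + self.b)

model = MLP(w=jnp.ones((3, 2)), b=jnp.zeros(3))

@eqx.filter_grad  # gradient w.r.t. every inexact-array leaf of the PyTree
def loss(m, x, y):
    return jnp.mean((m(x) - y) ** 2)

grads = loss(model, jnp.ones(2), jnp.zeros(3))  # grads is itself an MLP PyTree
```

Because grads has the same PyTree structure as model, optimiser updates become ordinary tree operations rather than bespoke parameter handling.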
2 code implementations • 21 Jun 2021 • James Morrill, Patrick Kidger, Lingyi Yang, Terry Lyons
This is fine when the whole time series is observed in advance, but means that Neural CDEs are not suitable for use in online prediction tasks, where predictions need to be made in real time: a major use case for recurrent networks.
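A hedged sketch of the setting, using torchcde (the companion library to this line of work). The key point is to pick an interpolation whose value at time $t$ depends only on observations up to $t$; the function names below follow torchcde's documentation and should be checked against your installed version.

```python
import torch
import torchcde

class F(torch.nn.Module):
    # The CDE vector field f_theta; its output is contracted against dX/dt.
    def __init__(self, input_channels=3, hidden_channels=8):
        super().__init__()
        self.input_channels = input_channels
        self.hidden_channels = hidden_channels
        self.linear = torch.nn.Linear(hidden_channels,
                                      hidden_channels * input_channels)

    def forward(self, t, z):
        return self.linear(z).tanh().view(-1, self.hidden_channels,
                                          self.input_channels)

x = torch.rand(32, 20, 3)  # (batch, time, channels)
# Causal interpolation: its value at t uses only observations up to t.
coeffs = torchcde.hermite_cubic_coefficients_with_backward_differences(x)
X = torchcde.CubicSpline(coeffs)
z0 = torch.zeros(32, 8)
z = torchcde.cdeint(X=X, func=F(), z0=z0, t=X.interval)  # (batch, len(t), hidden)
```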
2 code implementations • NeurIPS 2021 • Patrick Kidger, James Foster, Xuechen Li, Terry Lyons
This reduces computational cost (giving up to a 1.87× speedup) and removes the numerical truncation errors associated with the gradient penalty.
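As a hedged illustration, the reversible solver associated with this work is exposed in torchsde; the method strings below follow torchsde's documentation (reversible Heun is a Stratonovich solver) and should be treated as version-dependent.

```python
import torch
import torchsde

class SDE(torch.nn.Module):
    noise_type = "diagonal"
    sde_type = "stratonovich"

    def __init__(self):
        super().__init__()
        self.mu = torch.nn.Linear(2, 2)     # drift network
        self.sigma = torch.nn.Linear(2, 2)  # diffusion network

    def f(self, t, y):
        return self.mu(y)

    def g(self, t, y):
        return self.sigma(y)

y0 = torch.zeros(16, 2)
ts = torch.linspace(0, 1, 20)
# Reversible solver in the forward pass, its exact algebraic inverse in
# the backward pass: gradients without solver truncation error.
ys = torchsde.sdeint_adjoint(SDE(), y0, ts, dt=0.05,
                             method="reversible_heun",
                             adjoint_method="adjoint_reversible_heun")
```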
1 code implementation • 6 Feb 2021 • Patrick Kidger, James Foster, Xuechen Li, Harald Oberhauser, Terry Lyons
Stochastic differential equations (SDEs) are a staple of mathematical modelling of temporal dynamics.
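For readers unfamiliar with the object being modelled, here is a library-free Euler-Maruyama simulation of an SDE $\mathrm{d}Y = f(Y)\,\mathrm{d}t + g(Y)\,\mathrm{d}W$; the Ornstein-Uhlenbeck choice of $f$ and $g$ is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(f, g, y0, t1=1.0, steps=1000):
    # Simulate dY = f(Y) dt + g(Y) dW with the Euler-Maruyama scheme.
    y, dt = y0, t1 / steps
    for _ in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=y.shape)
        y = y + f(y) * dt + g(y) * dW
    return y

# Ornstein-Uhlenbeck process: mean-reverting drift, constant diffusion.
y1 = euler_maruyama(f=lambda y: -y,
                    g=lambda y: 0.5 * np.ones_like(y),
                    y0=np.zeros(8))
```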
no code implementations • 1 Jan 2021 • Patrick Kidger, James Foster, Xuechen Li, Harald Oberhauser, Terry Lyons
Several authors have introduced Neural Stochastic Differential Equations (Neural SDEs), often involving complex theory with various limitations.
no code implementations • 28 Sep 2020 • James Morrill, Patrick Kidger, Cristopher Salvi, James Foster, Terry Lyons
Neural Controlled Differential Equations (Neural CDEs) are the continuous-time analogue of an RNN, just as Neural ODEs are analogous to ResNets.
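For reference, the standard formulation evolves a hidden state $z$ along a continuous path $X$ interpolating the data:

$$ z_t = z_{t_0} + \int_{t_0}^{t} f_\theta(z_s)\,\mathrm{d}X_s, $$

where $f_\theta$ is a neural network. Taking $X_s = s$ recovers a Neural ODE; making the dynamics depend on $\mathrm{d}X$ is what lets the hidden state respond to incoming data over time, as an RNN does.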
3 code implementations • 20 Sep 2020 • Patrick Kidger, Ricky T. Q. Chen, Terry Lyons
Neural differential equations may be trained by backpropagating gradients via the adjoint method, which involves solving another differential equation, typically with an adaptive-step-size numerical solver.
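This entry's trick (accepting adjoint steps under a seminorm that ignores the parameter-gradient channels, so the backward solve takes larger steps) is exposed in torchdiffeq. A hedged sketch; the adjoint_options argument follows torchdiffeq's documentation:

```python
import torch
from torchdiffeq import odeint_adjoint

class Func(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(2, 32),
                                       torch.nn.Tanh(),
                                       torch.nn.Linear(32, 2))

    def forward(self, t, y):
        return self.net(y)

y0 = torch.randn(16, 2)
t = torch.linspace(0, 1, 10)
# Use a seminorm for step-size control in the backward (adjoint) solve.
ys = odeint_adjoint(Func(), y0, t, adjoint_options=dict(norm="seminorm"))
```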
3 code implementations • 17 Sep 2020 • James Morrill, Cristopher Salvi, Patrick Kidger, James Foster, Terry Lyons
Neural controlled differential equations (Neural CDEs) are the continuous-time analogue of recurrent neural networks, just as Neural ODEs are to residual networks, and offer a memory-efficient way to model functions of potentially irregular time series.
Ranked #4 on Time Series Classification on EigenWorms
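A hedged sketch of the core preprocessing idea: summarise a long series window by window with truncated log-signatures, which then drive a smaller recurrent or CDE model over far fewer steps. signatory.logsignature is the documented library call; the windowing helper is our own illustration.

```python
import torch
import signatory

x = torch.rand(4, 1000, 3)  # (batch, length, channels): a long time series

def logsig_windows(path, depth, window):
    # Split into non-overlapping windows, then summarise each one by its
    # truncated log-signature.
    pieces = path.unfold(1, window, window)   # (batch, n_win, channels, window)
    pieces = pieces.permute(0, 1, 3, 2)       # (batch, n_win, window, channels)
    return torch.stack([signatory.logsignature(pieces[:, i], depth)
                        for i in range(pieces.size(1))], dim=1)

feats = logsig_windows(x, depth=2, window=50)  # (batch, 20, logsig_dim)
```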
1 code implementation • 1 Jun 2020 • James Morrill, Adeline Fermanian, Patrick Kidger, Terry Lyons
There is a great deal of flexibility in how the signature method can be applied.
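As a hedged illustration of one point in that design space (time-augmentation followed by a depth-3 signature), using signatory's documented signature call:

```python
import torch
import signatory

# Augment the path with a time channel, then take a truncated signature
# as a fixed-size feature vector.
x = torch.rand(8, 30, 2)                       # (batch, length, channels)
t = torch.linspace(0, 1, 30).expand(8, 30).unsqueeze(-1)
path = torch.cat([t, x], dim=-1)               # time-augmented path
features = signatory.signature(path, depth=3)  # (batch, 3 + 9 + 27 = 39)
```

Varying the augmentation, windowing, and truncation depth yields the family of variants the excerpt alludes to.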
2 code implementations • 28 May 2020 • Patrick Kidger, James Morrill, Terry Lyons
The shapelet transform is a form of feature extraction for time series, in which a time series is described by its similarity to each of a collection of 'shapelets'.
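A library-free sketch of the transform as described: each feature is the minimum distance between a shapelet and any window of the series. This basic L2 form is illustrative; the work above generalises it, in particular to irregularly sampled series.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    # Minimum sliding-window L2 distance between the shapelet and the series.
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    return np.min(np.linalg.norm(windows - shapelet, axis=1))

series = np.sin(np.linspace(0, 6, 100))
shapelets = [np.sin(np.linspace(0, 1, 10)), np.ones(10)]
features = np.array([shapelet_distance(series, s) for s in shapelets])
```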
5 code implementations • NeurIPS 2020 • Patrick Kidger, James Morrill, James Foster, Terry Lyons
The resulting neural controlled differential equation model is directly applicable to the general setting of partially observed, irregularly sampled multivariate time series, and (unlike previous work on this problem) it may utilise memory-efficient adjoint-based backpropagation even across observations.
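A hedged sketch of how the partially observed, irregularly sampled setting is commonly encoded with torchcde: missing observations are recorded as NaN and handled by the interpolation. The function name natural_cubic_coeffs is taken from torchcde's documentation and may differ across versions.

```python
import torch
import torchcde

x = torch.rand(8, 50, 3)                      # (batch, time, channels)
x[torch.rand_like(x) < 0.3] = float("nan")    # 30% missing observations
coeffs = torchcde.natural_cubic_coeffs(x)     # interpolation fills the gaps
X = torchcde.CubicSpline(coeffs)
# X can now drive torchcde.cdeint exactly as in the online example above,
# with the adjoint-based backpropagation mentioned in the excerpt.
```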
1 code implementation • ICLR 2021 • Patrick Kidger, Terry Lyons
Signatory is a library for computing the signature and logsignature transforms, together with related functionality.
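A brief usage example of the two core operations, with names as in Signatory's documentation:

```python
import torch
import signatory

path = torch.rand(32, 100, 4)                    # (batch, stream, channels)
sig = signatory.signature(path, depth=3)         # truncated signature
logsig = signatory.logsignature(path, depth=3)   # truncated log-signature
# Signature of every prefix of the stream, rather than just the whole path:
sig_stream = signatory.signature(path, depth=3, stream=True)
```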
3 code implementations • NeurIPS 2019 • Patric Bonnier, Patrick Kidger, Imanol Perez Arribas, Cristopher Salvi, Terry Lyons
The signature is an infinite graded sequence of statistics known to characterise a stream of data up to a negligible equivalence class.
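For reference, the signature of a path $X : [0, 1] \to \mathbb{R}^d$ is the graded sequence of iterated integrals

$$ S(X) = \left( \int_{0 < t_1 < \cdots < t_k < 1} \mathrm{d}X_{t_1} \otimes \cdots \otimes \mathrm{d}X_{t_k} \right)_{k \ge 0}, $$

so the depth-$k$ term has $d^k$ entries, and truncating at a fixed depth yields the finite-dimensional features used in practice.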
no code implementations • 21 May 2019 • Patrick Kidger, Terry Lyons
The classical Universal Approximation Theorem holds for neural networks of arbitrary width and bounded depth.
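Concretely, the classical statement is: for compact $K \subset \mathbb{R}^n$, a suitable activation $\sigma$, any continuous $f : K \to \mathbb{R}$, and any $\varepsilon > 0$, there exists a single-hidden-layer network

$$ g(x) = \sum_{i=1}^{N} a_i\, \sigma(w_i \cdot x + b_i) \qquad \text{with} \qquad \sup_{x \in K} |f(x) - g(x)| < \varepsilon, $$

with no bound on the width $N$. The work above concerns the dual regime: bounded width and arbitrary depth.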