1 code implementation • NeurIPS 2023 • Kevin Course, Prasanth B. Nair
We consider the problem of inferring latent stochastic differential equations (SDEs) with a time and memory cost that scales independently of the amount of data, the total length of the time series, and the stiffness of the approximate differential equations.
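A minimal sketch of the general idea, not the paper's estimator: with an explicit variational posterior over latent paths, $q(z_t) = \mathcal{N}(m(t), S(t))$, the objective becomes a sum over observation times that can be estimated by subsampling, so a gradient step never integrates the SDE over the full series. Here `post_mean` and `drift` are hypothetical stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000                                          # long observed series
t_obs = np.linspace(0.0, 100.0, T)
y_obs = np.sin(t_obs) + 0.1 * rng.standard_normal(T)

def post_mean(t):                # variational posterior mean m(t)
    return np.sin(t)

def drift(z):                    # current guess of the SDE drift f(z)
    return -z

def loss_estimate(batch=64):
    i = rng.integers(1, T - 1, size=batch)          # cost ~ O(batch), not O(T)
    m = post_mean(t_obs[i])
    recon = -0.5 * np.sum((y_obs[i] - m) ** 2)      # data-fit term
    dm = (post_mean(t_obs[i + 1]) - post_mean(t_obs[i - 1])) / (
        t_obs[i + 1] - t_obs[i - 1])                # dm/dt by local differences
    resid = -0.5 * np.sum((dm - drift(m)) ** 2)     # path/prior residual term
    return (recon + resid) * (T / batch)            # rescale to the full sum

print(loss_estimate())
```

Because the residual is evaluated pointwise rather than by solving the differential equation, the stiffness of the learned drift never enters the per-iteration cost.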
1 code implementation • 11 Apr 2021 • Kevin L. Course, Trefor W. Evans, Prasanth B. Nair
We present a method for learning generalized Hamiltonian decompositions of ordinary differential equations given a set of noisy time series measurements.
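A toy sketch under strong assumptions (a linear system with quadratic Hamiltonian, fit by least squares rather than the paper's method): a generalized Hamiltonian model $\dot{x} = (J - R)\,\nabla H(x)$ with skew-symmetric $J$ and positive semidefinite dissipation $R$ can be recovered from noisy derivative estimates, since the skew and symmetric parts of the fitted dynamics matrix separate the two roles when $H(x) = \tfrac{1}{2}\|x\|^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 2000
J_true = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic part
R_true = np.diag([0.05, 0.0])                  # dissipation
x = np.zeros((n, 2)); x[0] = [1.0, 0.0]
for k in range(n - 1):                         # Euler rollout of the true system
    x[k + 1] = x[k] + dt * (J_true - R_true) @ x[k]
x += 0.01 * rng.standard_normal(x.shape)       # measurement noise

dxdt = np.gradient(x, dt, axis=0)              # noisy derivative estimates
# With H(x) = 0.5 |x|^2, dx/dt = M x where M = J - R; recover M by lstsq.
B, *_ = np.linalg.lstsq(x, dxdt, rcond=None)
M_hat = B.T
J_hat = 0.5 * (M_hat - M_hat.T)                # skew part -> symplectic J
R_hat = -0.5 * (M_hat + M_hat.T)               # symmetric part -> dissipation R
print(np.round(J_hat, 2), np.round(R_hat, 2))
```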
1 code implementation • 4 Jun 2020 • Trefor W. Evans, Prasanth B. Nair
We introduce a stochastic variational inference procedure for training scalable Gaussian process (GP) models whose per-iteration complexity is independent of both the number of training points, $n$, and the number of basis functions used in the kernel approximation, $m$.
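A minimal sketch of the doubly-subsampled idea, using plain SGD on squared error as a stand-in for the paper's variational objective: each step touches only a minibatch of the $n$ data points and a minibatch of the $m$ random-feature basis functions, so its cost depends on neither. The random Fourier features and the learning rates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, d = 100_000, 4096, 1
X = rng.uniform(-3.0, 3.0, size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
omega = rng.standard_normal((m, d))            # fixed RFF frequencies
b = rng.uniform(0.0, 2 * np.pi, size=m)        # fixed RFF phases
w = np.zeros(m)                                # weights (variational mean)

def sgd_step(nb=256, mb=512, lr=0.05):
    i = rng.integers(0, n, size=nb)            # data minibatch: O(nb)
    j = rng.integers(0, m, size=mb)            # basis minibatch: O(mb)
    phi = np.sqrt(2.0 / m) * np.cos(X[i] @ omega[j].T + b[j])   # (nb, mb) slice
    f = (m / mb) * (phi @ w[j])                # rescaled partial model output
    g = (m / mb) * phi.T @ (f - y[i]) / nb     # grad of 0.5 * mean sq. error
    w[j] -= lr * g

for _ in range(2000):
    sgd_step()
i = rng.integers(0, n, size=1000)              # quick check on held-out batch
pred = np.sqrt(2.0 / m) * np.cos(X[i] @ omega.T + b) @ w
print(np.mean((pred - y[i]) ** 2))
```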
2 code implementations • NeurIPS 2018 • Trefor W. Evans, Prasanth B. Nair
We explore a new research direction in Bayesian variational inference with discrete latent variable priors where we exploit Kronecker matrix algebra for efficient and exact computations of the evidence lower bound (ELBO).
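A sketch of the core Kronecker identity (not the paper's full ELBO computation): with a factorized discrete posterior $q(z) = \prod_i q_i(z_i)$, an expectation over all $k^d$ joint configurations is an inner product with $q_1 \otimes \cdots \otimes q_d$; when the integrand also factorizes, the identity $\langle \otimes_i q_i, \otimes_i f_i \rangle = \prod_i \langle q_i, f_i \rangle$ evaluates it exactly in $\mathcal{O}(dk)$ instead of $\mathcal{O}(k^d)$.

```python
import numpy as np

k, d = 4, 10                                   # 4^10 ~ 1e6 joint configurations
rng = np.random.default_rng(3)
qs = [rng.dirichlet(np.ones(k)) for _ in range(d)]   # factorized posterior
fs = [rng.standard_normal(k) for _ in range(d)]      # f(z) = prod_i f_i(z_i)

# Naive: materialize the full joint (only feasible for small k^d).
q_joint, f_joint = qs[0], fs[0]
for qi, fi in zip(qs[1:], fs[1:]):
    q_joint = np.kron(q_joint, qi)
    f_joint = np.kron(f_joint, fi)
naive = q_joint @ f_joint

# Kronecker algebra: the same expectation, factor by factor.
fast = np.prod([qi @ fi for qi, fi in zip(qs, fs)])
print(np.allclose(naive, fast))                # True
```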
1 code implementation • 9 Aug 2018 • Trefor W. Evans, Prasanth B. Nair
We propose two methods for exact Gaussian process (GP) inference and learning on massive image, video, spatio-temporal, or multi-output datasets with missing values (or "gaps") in the observed responses.
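One possible realization of the idea, sketched under assumptions (a 2-D grid with a separable RBF kernel; the paper's algorithms may differ): with a product kernel the full covariance is $K = K_x \otimes K_y$, and the observed sub-system over the non-missing entries can be solved exactly by conjugate gradients, using fast Kronecker matrix-vector products so $K$ is never formed.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def rbf(t, ls=1.0):
    return np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ls**2)

gx, gy = np.linspace(0, 4, 30), np.linspace(0, 4, 25)
Kx, Ky = rbf(gx), rbf(gy)
P = Kx.shape[0] * Ky.shape[0]

rng = np.random.default_rng(4)
obs = rng.random(P) > 0.3                      # ~30% of the grid is missing
f = np.sin(gx)[:, None] + np.cos(gy)[None, :]
y = f.ravel()[obs] + 0.05 * rng.standard_normal(obs.sum())
noise = 0.05**2

def kron_mv(v):                                # (Kx ⊗ Ky) v without forming K
    V = v.reshape(Kx.shape[0], Ky.shape[0])
    return (Kx @ V @ Ky.T).ravel()

def masked_mv(v):                              # (W K W^T + s2 I) v on the gaps
    full = np.zeros(P); full[obs] = v
    return kron_mv(full)[obs] + noise * v

A = LinearOperator((obs.sum(), obs.sum()), matvec=masked_mv)
alpha, info = cg(A, y)                         # exact solve on observed entries
print(info)                                    # 0 => converged
```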
2 code implementations • ICML 2018 • Trefor W. Evans, Prasanth B. Nair
We introduce a kernel approximation strategy that enables computation of the Gaussian process log marginal likelihood and all hyperparameter derivatives in $\mathcal{O}(p)$ time.
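A sketch of the broad trick only (the paper's $\mathcal{O}(p)$ method is more refined): with a rank-$p$ kernel approximation $K \approx a\,\Phi\Phi^\top + \sigma^2 I$, the Woodbury identity and the matrix determinant lemma reduce the log marginal likelihood to $p \times p$ operations, so after a one-time precomputation of $\Phi^\top\Phi$ and $\Phi^\top y$, re-evaluating it for new hyperparameters $(a, \sigma^2)$ never touches the $n \times n$ kernel. The features below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 5000, 50
Phi = rng.standard_normal((n, p)) / np.sqrt(p)  # stand-in basis features
y = Phi @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

PtP, Pty, yty = Phi.T @ Phi, Phi.T @ y, y @ y   # one-time precomputation

def log_marginal(a, s2):
    # K = a Phi Phi^T + s2 I; everything from here is p x p.
    M = np.eye(p) / a + PtP / s2                # Woodbury inner matrix
    L = np.linalg.cholesky(M)
    v = np.linalg.solve(L, Pty) / s2
    quad = yty / s2 - v @ v                     # y^T K^{-1} y
    logdet = (n * np.log(s2) + p * np.log(a)    # matrix determinant lemma
              + 2 * np.sum(np.log(np.diag(L))))
    return -0.5 * (quad + logdet + n * np.log(2 * np.pi))

print(log_marginal(1.0, 0.01))
```

Hyperparameter derivatives follow the same pattern, since differentiating `log_marginal` only involves the precomputed $p \times p$ quantities.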