1 code implementation • 31 Oct 2023 • Jihao Andreas Lin, Shreyas Padhy, Javier Antorán, Austin Tripp, Alexander Terenin, Csaba Szepesvári, José Miguel Hernández-Lobato, David Janz
We study the use of stochastic gradient descent for solving the large linear systems that arise in Gaussian process regression, and show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- stochastic gradient descent is highly effective.
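The basic idea can be illustrated on a generic regularized linear system. The sketch below (matrix, step size, and iteration count are all made-up assumptions, not the paper's setup) runs plain row-sampling SGD on the least-squares objective and compares against a direct solve:

```python
import numpy as np

# Illustrative only: solve (B B^T / n + I) v = y by stochastic gradient descent
# on 0.5 * sum_i (A_i . v - y_i)^2, sampling one row of the system per step.
rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)          # symmetric positive definite
y = rng.standard_normal(n)

v = np.zeros(n)
lr = 0.05                            # assumed step size
for step in range(10000):
    i = rng.integers(n)              # sample one row of the system
    residual = A[i] @ v - y[i]       # scalar residual for that row
    v -= lr * residual * A[i]        # SGD step on that row's squared error

exact = np.linalg.solve(A, y)
print(np.linalg.norm(v - exact))     # small once SGD has converged
```

Because the system is consistent, the per-row gradient noise vanishes at the solution, so even a constant step size converges; the paper's contribution concerns doing this well at scale.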
no code implementations • NeurIPS 2023 • Andreas Östling, Holli Sargeant, Huiyuan Xie, Ludwig Bull, Alexander Terenin, Leif Jonsson, Måns Magnusson, Felix Steffek
We introduce the Cambridge Law Corpus (CLC), a dataset for legal AI research.
1 code implementation • NeurIPS 2023 • Paul Rosa, Viacheslav Borovitskiy, Alexander Terenin, Judith Rousseau
Gaussian processes are used in many machine learning applications that rely on uncertainty quantification.
1 code implementation • 2 Sep 2023 • Lucas Cosier, Rares Iordan, Sicelukwanda Zwane, Giovanni Franzese, James T. Wilson, Marc Peter Deisenroth, Alexander Terenin, Yasemin Bekiroglu
To control how a robot moves, motion planning algorithms must compute paths in high-dimensional state spaces while accounting for physical constraints related to motors and joints, generating smooth and stable motions, avoiding obstacles, and preventing collisions.
1 code implementation • NeurIPS 2023 • Jihao Andreas Lin, Javier Antorán, Shreyas Padhy, David Janz, José Miguel Hernández-Lobato, Alexander Terenin
Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems.
1 code implementation • 30 Jan 2023 • Iskander Azangulov, Andrei Smolensky, Alexander Terenin, Viacheslav Borovitskiy
The invariance of a Gaussian process' covariance to such symmetries gives rise to the most natural generalization of the concept of stationarity to such spaces.
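Stated concretely, this generalized stationarity is invariance of the kernel under the simultaneous group action on both arguments (a standard formulation, given here for orientation):

```latex
% Stationarity with respect to a group G acting on a space X:
k(g \cdot x,\, g \cdot y) = k(x, y)
  \quad \text{for all } g \in G,\; x, y \in X.
% On X = \mathbb{R}^d with G the translation group, this recovers the
% classical notion: k depends only on the difference x - y.
```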
1 code implementation • 14 Oct 2022 • Alexander Terenin, David R. Burt, Artem Artemev, Seth Flaxman, Mark van der Wilk, Carl Edward Rasmussen, Hong Ge
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
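The paper's actual selection criterion is not reproduced here, but the role inducing points play can be sketched generically: pick a small set of well-spread locations (a farthest-point heuristic below, an assumption for illustration) and approximate the full kernel matrix via the Nyström construction:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))       # stand-in geospatial inputs in [0, 1]^2

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel with an assumed lengthscale
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Greedy farthest-point selection of M inducing locations
M = 20
idx = [0]
for _ in range(M - 1):
    d = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1).min(1)
    idx.append(int(d.argmax()))
Z = X[idx]

# Nystrom approximation K ~ K_nm K_mm^{-1} K_mn built from the inducing points
K = rbf(X, X)
Knm = rbf(X, Z)
Kmm = rbf(Z, Z) + 1e-8 * np.eye(M)
K_approx = Knm @ np.linalg.solve(Kmm, Knm.T)
print(np.abs(K - K_approx).max())    # shrinks as M grows and coverage improves
```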
1 code implementation • 31 Aug 2022 • Iskander Azangulov, Andrei Smolensky, Alexander Terenin, Viacheslav Borovitskiy
The invariance of a Gaussian process' covariance to such symmetries gives rise to the most natural generalization of the concept of stationarity to such spaces.
no code implementations • 22 Feb 2022 • Alexander Terenin
In this dissertation, we develop techniques for broadening the applicability of Gaussian processes.
1 code implementation • 2 Nov 2021 • Noémie Jaquier, Viacheslav Borovitskiy, Andrei Smolensky, Alexander Terenin, Tamim Asfour, Leonel Rozo
Bayesian optimization is a data-efficient technique which can be used for control parameter tuning, parametric policy adaptation, and structure design in robotics.
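A minimal Bayesian-optimization loop conveys the data efficiency at work. Everything below is an illustrative assumption (RBF kernel, UCB acquisition, a toy 1D objective standing in for an expensive robot evaluation); real applications would use a proper GP library:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: -(x - 0.3) ** 2        # toy objective, maximized at x = 0.3

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

X = list(rng.uniform(size=2))        # two random initial evaluations
Y = [f(x) for x in X]
grid = np.linspace(0, 1, 201)        # candidate control parameters

for _ in range(15):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, Ya)                       # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))         # upper confidence bound
    x_next = grid[int(ucb.argmax())]                       # next evaluation
    X.append(x_next)
    Y.append(f(x_next))

best_x = X[int(np.argmax(Y))]
print(best_x)                        # best evaluated input
```

Each round spends one evaluation where the surrogate is either promising or uncertain, which is why a handful of trials suffices here.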
no code implementations • NeurIPS 2021 • Michael Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Whye Teh, Marc Peter Deisenroth
Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating construction of optimal decision-making systems.
1 code implementation • 22 Feb 2021 • Andreas Hochlehnert, Alexander Terenin, Steindór Sæmundsson, Marc Peter Deisenroth
Learning physically structured representations of dynamical systems that include contact between different objects is an important problem for learning-based approaches in robotics.
no code implementations • 14 Feb 2021 • Samuel Cohen, Alexander Terenin, Yannik Pitcan, Brandon Amos, Marc Peter Deisenroth, K S Sesh Kumar
To construct this distance, we introduce a characterization of the one-dimensional multi-marginal Kantorovich problem and use it to highlight a number of properties of the sliced multi-marginal Wasserstein distance.
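The tractability that sliced constructions exploit rests on the classical fact that one-dimensional optimal transport is solved by sorting. The sketch below shows the two-marginal case only (the paper's multi-marginal characterization generalizes this; the distributions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=1000)   # samples from N(0, 1)
y = rng.normal(2.0, 1.0, size=1000)   # samples from N(2, 1)

# 1D Wasserstein-1 between empirical measures: mean absolute difference
# of the sorted samples (i.e., match quantiles to quantiles).
w1 = np.mean(np.abs(np.sort(x) - np.sort(y)))
print(w1)                             # close to the mean shift of 2
```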
2 code implementations • 8 Nov 2020 • James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth
As Gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer.
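One route around scarce analytic solutions is pathwise conditioning via Matheron's rule: a draw from a conditional Gaussian can be obtained by updating a joint prior draw, $(a \mid b = \beta) = a + \mathrm{Cov}(a,b)\,\mathrm{Cov}(b,b)^{-1}(\beta - b)$. A finite-dimensional sanity check (the covariance and sample sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])                 # joint covariance of (a, b)
L = np.linalg.cholesky(C)
beta = 0.5                                 # observed value of b

z = rng.standard_normal((100000, 2)) @ L.T # joint prior samples of (a, b)
a, b = z[:, 0], z[:, 1]
samples = a + C[0, 1] / C[1, 1] * (beta - b)   # pathwise update of each draw

print(np.mean(samples))                    # ~ conditional mean 0.6 * 0.5 = 0.3
print(np.var(samples))                     # ~ conditional variance 1 - 0.36 = 0.64
```

The same update applied to function draws, rather than vectors, is what makes sampling-based answers available when closed forms are not.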
no code implementations • 29 Oct 2020 • Viacheslav Borovitskiy, Iskander Azangulov, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth, Nicolas Durrande
Gaussian processes are a versatile framework for learning unknown functions in a manner that permits one to utilize prior information about their properties.
1 code implementation • 22 Jun 2020 • Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, Marc Peter Deisenroth
Dynamic time warping (DTW) is a useful method for aligning, comparing and combining time series, but it requires them to live in comparable spaces.
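The classical DTW recursion makes the "comparable spaces" requirement concrete: the ground cost below compares scalar samples directly (squared differences, an assumption; the paper's point is what to do when no such common space exists):

```python
import numpy as np

def dtw(a, b):
    # Dynamic-programming DTW cost with squared-difference ground cost
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])   # same shape, shifted in time
print(dtw(a, b))                                # 0.0: the warping absorbs the shift
```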
1 code implementation • NeurIPS 2020 • Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth
Gaussian processes are an effective model class for learning unknown functions, particularly in settings where accurately representing predictive uncertainty is of key importance.
5 code implementations • ICML 2020 • James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth
Gaussian processes are the gold standard for many real-world modeling problems, especially in cases where a model's success hinges upon its ability to faithfully represent predictive uncertainty.
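One building block behind efficient posterior sampling is drawing approximate prior function samples from a random-feature expansion. A random-Fourier-feature sketch for an RBF-kernel GP (feature count and lengthscale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
L = 5000
omega = rng.standard_normal(L)          # frequencies from the RBF spectral density
phi = rng.uniform(0, 2 * np.pi, L)      # random phases
w = rng.standard_normal(L)              # feature weights

def feat(x):
    # Random Fourier feature map: feat(x) . feat(y) ~ exp(-0.5 (x - y)^2)
    return np.sqrt(2.0 / L) * np.cos(x * omega + phi)

def f(x):
    # One approximate GP prior function sample, cheap to evaluate anywhere
    return feat(x) @ w

k01 = feat(0.0) @ feat(1.0)
print(k01)                              # ~ exp(-0.5) ~ 0.607
```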
1 code implementation • 21 Oct 2019 • Steindor Saemundsson, Alexander Terenin, Katja Hofmann, Marc Peter Deisenroth
Learning workable representations of dynamical systems is becoming an increasingly important problem in a number of application areas.
1 code implementation • EMNLP 2020 • Alexander Terenin, Måns Magnusson, Leif Jonsson
To scale non-parametric extensions of probabilistic topic models such as Latent Dirichlet allocation to larger data sets, practitioners rely increasingly on parallel and distributed systems.
no code implementations • 17 Nov 2017 • Alexander Terenin, Eric P. Xing
Markov Chain Monte Carlo (MCMC) methods such as Gibbs sampling are finding widespread use in applied statistics and machine learning.
1 code implementation • 12 Apr 2017 • Alexander Terenin, Måns Magnusson, Leif Jonsson, David Draper
We conclude by comparing the performance of our algorithm with that of other approaches on well-known corpora.
1 code implementation • 15 Aug 2016 • Alexander Terenin, Shawfeng Dong, David Draper
Gibbs sampling is a widely used Markov chain Monte Carlo (MCMC) method for numerically approximating integrals of interest in Bayesian statistics and other mathematical sciences.
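The textbook example is a Gibbs sampler for a bivariate normal with correlation rho, where each full conditional is itself Gaussian; sample averages then approximate the integrals of interest (target and chain length below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
rho = 0.8
x, y = 0.0, 0.0
xs = []
for t in range(20000):
    # Alternate draws from the exact full conditionals
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))   # x | y ~ N(rho*y, 1 - rho^2)
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))   # y | x ~ N(rho*x, 1 - rho^2)
    xs.append(x)

xs = np.array(xs[1000:])              # discard burn-in
print(xs.mean(), xs.var())            # ~ 0 and ~ 1 for the standard marginal
```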
Computation • Distributed, Parallel, and Cluster Computing
no code implementations • 30 Sep 2015 • Alexander Terenin, Daniel Simpson, David Draper
We introduce a theoretical framework for analyzing asynchronous Gibbs sampling and other extensions of MCMC that do not possess the Markov property.
Computation