Search Results for author: David Draper

Found 8 papers, 2 papers with code

Annealing Double-Head: An Architecture for Online Calibration of Deep Neural Networks

no code implementations · 27 Dec 2022 · Erdong Guo, David Draper, Maria De Iorio

Model calibration, which concerns whether a model's predicted probabilities match how frequently it predicts correctly, not only plays a vital part in statistical model design but also has substantial practical applications, such as optimal decision-making in the real world.

Decision Making
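The calibration notion above can be made concrete with the expected calibration error (ECE), which bins predictions by confidence and compares each bin's accuracy to its mean confidence. The sketch below is illustrative only and is not the online-calibration architecture from the paper; the binary setup and `n_bins` default are assumptions:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE for a binary classifier: bin predictions by confidence and
    average the gap between each bin's accuracy and mean confidence."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    preds = (probs >= 0.5).astype(int)
    # Confidence in the predicted class (not just the class-1 probability).
    conf = np.where(preds == 1, probs, 1.0 - probs)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (preds[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return ece
```

A model that says "60% confident" and is right 60% of the time in that bin contributes nothing to the ECE; a perfectly calibrated model scores zero.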

Neural Tangent Kernel of Matrix Product States: Convergence and Applications

no code implementations · 28 Nov 2021 · Erdong Guo, David Draper

In this work, we study the Neural Tangent Kernel (NTK) of Matrix Product States (MPS) and its convergence in the infinite-bond-dimension limit.

regression
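As background, the NTK of any parametric model can be estimated empirically as the inner product of parameter gradients at pairs of inputs. The toy model below is purely illustrative (an ordinary tiny network, not an MPS, with made-up sizes), using finite differences for the gradients:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(size=10)  # parameters of a toy one-hidden-layer model

def f(params, x):
    """Toy scalar model: hidden width 3, input dimension 2."""
    W1 = params[:6].reshape(3, 2)
    w2 = params[6:9]
    b = params[9]
    return w2 @ np.tanh(W1 @ x) + b

def grad_theta(x, eps=1e-6):
    """Finite-difference gradient of f with respect to the parameters."""
    g = np.empty_like(theta)
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (f(tp, x) - f(tm, x)) / (2 * eps)
    return g

def empirical_ntk(x1, x2):
    """NTK entry: inner product of parameter gradients at two inputs."""
    return grad_theta(x1) @ grad_theta(x2)
```

The resulting kernel is symmetric and positive on the diagonal by construction; the NTK literature studies how this object concentrates and stays constant as the model grows (here, as bond dimension grows).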

Representation Theorem for Matrix Product States

no code implementations · 15 Mar 2021 · Erdong Guo, David Draper

We study the relation between MPS and neural networks and show that the MPS with a scale-invariant sigmoidal function is equivalent to a one-hidden-layer neural network equipped with a kernel function.

Infinitely Wide Tensor Networks as Gaussian Process

no code implementations · 7 Jan 2021 · Erdong Guo, David Draper

It is known that, with an appropriate prior on the weights, a Bayesian neural network converges to a Gaussian process in the infinite-width limit.

Tensor Networks
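This limit can be checked numerically: with standard Gaussian priors on weights and biases and 1/sqrt(width) output scaling, a wide one-hidden-layer network's output over random parameter draws looks approximately Gaussian. The sketch below is an illustrative check, not code from the paper; the width, activation, and test input are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_output(x, width):
    """One draw from the prior over network outputs: standard Gaussian
    weights and biases, output scaled by 1/sqrt(width)."""
    W1 = rng.normal(size=(width, x.shape[0]))
    b1 = rng.normal(size=width)
    w2 = rng.normal(size=width)
    return w2 @ np.tanh(W1 @ x + b1) / np.sqrt(width)

x = np.array([1.0, -0.5])
samples = np.array([random_net_output(x, width=2000) for _ in range(500)])
# For large width these draws approach a zero-mean Gaussian whose
# variance is the corresponding GP kernel evaluated at (x, x).
```

The paper's point is that the same kind of limit holds for tensor networks when the bond dimension, rather than the layer width, is taken to infinity.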

The Bayesian Method of Tensor Networks

no code implementations · 1 Jan 2021 · Erdong Guo, David Draper

By Bayes' rule, the external information (prior distribution) and the internal information (training-data likelihood) are combined coherently; the resulting posterior distribution and posterior predictive (marginal) distribution summarize all the information needed for inference and prediction, respectively.

Tensor Networks

Pólya Urn Latent Dirichlet Allocation: a doubly sparse massively parallel sampler

1 code implementation · 12 Apr 2017 · Alexander Terenin, Måns Magnusson, Leif Jonsson, David Draper

We conclude by comparing the performance of our algorithm with that of other approaches on well-known corpora.

Topic Models

GPU-accelerated Gibbs sampling: a case study of the Horseshoe Probit model

1 code implementation · 15 Aug 2016 · Alexander Terenin, Shawfeng Dong, David Draper

Gibbs sampling is a widely used Markov chain Monte Carlo (MCMC) method for numerically approximating integrals of interest in Bayesian statistics and other mathematical sciences.

Computation · Distributed, Parallel, and Cluster Computing
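As a reminder of the basic (single-threaded) algorithm, a Gibbs sampler alternates draws from each variable's full conditional distribution. The sketch below targets a standard bivariate normal with correlation `rho`, a textbook example unrelated to the Horseshoe Probit model in the paper:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    """Gibbs sampler for a zero-mean bivariate normal with unit variances
    and correlation rho. Each full conditional is one-dimensional:
    x | y ~ N(rho * y, 1 - rho**2), and symmetrically for y | x."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho**2)
    x = y = 0.0
    samples = np.empty((n_iter, 2))
    for i in range(n_iter):
        x = rng.normal(rho * y, sd)  # draw x from its full conditional
        y = rng.normal(rho * x, sd)  # draw y from its full conditional
        samples[i] = x, y
    return samples
```

Because the inner loop is a long chain of cheap conditional draws, the paper's GPU setting instead exploits models whose conditionals factor into many independent draws that can be sampled in parallel.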

Asynchronous Gibbs Sampling

no code implementations · 30 Sep 2015 · Alexander Terenin, Daniel Simpson, David Draper

We introduce a theoretical framework for analyzing asynchronous Gibbs sampling and other extensions of MCMC that do not possess the Markov property.

Computation