1 code implementation • 22 Feb 2024 • Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles C. Margossian, Robert M. Gower, David M. Blei, Lawrence K. Saul
We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance.
1 code implementation • 13 Mar 2023 • Aishwarya Mandyam, Didong Li, Diana Cai, Andrew Jones, Barbara E. Engelhardt
Inverse reinforcement learning (IRL) is a powerful framework to infer an agent's reward function by observing its behavior, but IRL algorithms that learn point estimates of the reward function can be misleading because there may be several functions that describe an agent's behavior equally well.
no code implementations • 4 Oct 2022 • Diana Cai, Ryan P. Adams
A key challenge in applying MCMC to scientific domains is computation: the target density of interest is often a function of expensive computations, such as a high-fidelity physical simulation, an intractable integral, or a slowly-converging iterative algorithm.
no code implementations • NeurIPS 2021 • David Zoltowski, Diana Cai, Ryan P. Adams
Slice sampling is a Markov chain Monte Carlo algorithm for simulating samples from probability distributions; it only requires a density function that can be evaluated point-wise up to a normalization constant, making it applicable to a variety of inference problems and unnormalized models.
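The mechanism described above — sampling from a density known only up to a normalizing constant by alternating between an auxiliary "height" variable and a uniform draw from the resulting slice — can be sketched in a few lines. This is a generic univariate slice sampler with stepping-out and shrinkage, not the method of the paper itself:

```python
import math
import random

def slice_sample(logpdf, x0, n_samples, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage.

    logpdf may be unnormalized: only pointwise evaluation up to an
    additive constant (in log space) is required.
    """
    rng = random.Random(seed)
    samples, x = [], x0
    for _ in range(n_samples):
        # Auxiliary variable: the slice is {x : logpdf(x) > y}.
        y = logpdf(x) + math.log(rng.random())
        # Step out an interval of width w until it brackets the slice.
        left = x - w * rng.random()
        right = left + w
        while logpdf(left) > y:
            left -= w
        while logpdf(right) > y:
            right += w
        # Shrinkage: sample uniformly, shrinking the bracket on rejection.
        while True:
            x_new = left + (right - left) * rng.random()
            if logpdf(x_new) > y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return samples
```

For example, passing `lambda x: -0.5 * x * x` (an unnormalized standard normal log-density) recovers samples with mean near 0 and variance near 1.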
no code implementations • AABI Symposium 2022 • Aishwarya Mandyam, Didong Li, Diana Cai, Andrew Jones, Barbara Engelhardt
Inverse reinforcement learning (IRL) methods attempt to recover the reward function of an agent by observing its behavior.
1 code implementation • 26 Mar 2021 • Gregory W. Gundersen, Diana Cai, Chuteng Zhou, Barbara E. Engelhardt, Ryan P. Adams
We propose a multi-fidelity approach that makes cost-sensitive decisions about which data fidelity to collect based on maximizing information gain with respect to changepoints.
no code implementations • NeurIPS Workshop ICBINB 2020 • Diana Cai, Trevor Campbell, Tamara Broderick
Increasingly, though, data science papers suggest potential alternatives beyond vanilla FMMs, such as power posteriors, coarsening, and related methods.
no code implementations • 8 Jul 2020 • Diana Cai, Trevor Campbell, Tamara Broderick
In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data.
no code implementations • 20 Mar 2020 • Diana Cai, Rishit Sheth, Lester Mackey, Nicolo Fusi
Meta-learning leverages related source tasks to learn an initialization that can be quickly fine-tuned to a target task with limited labeled examples.
no code implementations • NeurIPS 2018 • Diana Cai, Michael Mitzenmacher, Ryan P. Adams
The count-min sketch is a time- and memory-efficient randomized data structure that provides a point estimate of the number of times an item has appeared in a data stream.
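The classical data structure referenced here is small enough to sketch in full: `depth` hash rows of `width` counters, with the point estimate taken as the row-wise minimum, which can only overestimate the true count. This is the standard count-min sketch, not the Bayesian extension studied in the paper; the salted use of Python's built-in `hash` is a stand-in for pairwise-independent hash functions:

```python
import random

class CountMinSketch:
    """Count-min sketch: a point estimate of item frequencies in a stream.

    Each update increments one counter per row; a query returns the
    minimum over rows, a one-sided (upward-biased) estimate.
    """

    def __init__(self, width=2048, depth=5, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        # Salted built-in hashing stands in for pairwise-independent hashes.
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        return hash((self.salts[row], item)) % self.width

    def update(self, item, count=1):
        for r in range(self.depth):
            self.table[r][self._index(item, r)] += count

    def query(self, item):
        # Minimum over rows: never below the true count, rarely far above.
        return min(self.table[r][self._index(item, r)]
                   for r in range(self.depth))
```

With 100 updates of one item and a handful of other items, `query` returns at least 100, and exceeds it only when the item collides with others in every row.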
no code implementations • 16 Dec 2016 • Diana Cai, Trevor Campbell, Tamara Broderick
Many popular network models rely on the assumption of (vertex) exchangeability, in which the distribution of the graph is invariant to relabelings of the vertices.
no code implementations • 22 Mar 2016 • Diana Cai, Tamara Broderick
As individual network datasets continue to grow in size, it is necessary to develop models that accurately represent the real-life scaling properties of networks.
no code implementations • NeurIPS 2016 • Tamara Broderick, Diana Cai
We show that, unlike node exchangeability, edge exchangeability encompasses models that are known to provide a projective sequence of random graphs that circumvent the Aldous-Hoover Theorem and exhibit sparsity, i.e., sub-quadratic growth of the number of edges with the number of nodes.
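The compatibility of edge exchangeability with sparsity can be illustrated with a toy simulation (this is not the construction from the paper, just a hedged sketch): drawing edge endpoints i.i.d. from a fixed heavy-tailed distribution over vertex labels gives a trivially exchangeable edge sequence in which new vertices keep appearing, so the edge count grows sub-quadratically in the node count.

```python
import random

def edge_list(n_edges, alpha=1.1, seed=0):
    """Draw edges i.i.d. with Pareto(alpha)-distributed integer endpoints.

    An i.i.d. edge sequence is exchangeable by construction; the heavy
    tail keeps producing previously unseen vertex labels.
    """
    rng = random.Random(seed)
    draw = lambda: int(rng.paretovariate(alpha))  # labels are >= 1
    return [(draw(), draw()) for _ in range(n_edges)]

def growth(edges):
    """Track (num_distinct_nodes, num_edges) as edges accumulate."""
    seen, out = set(), []
    for i, (u, v) in enumerate(edges, 1):
        seen.update((u, v))
        out.append((len(seen), i))
    return out
```

Running `growth(edge_list(20000))` shows the number of edges remaining far below the square of the number of nodes, i.e., the sampled graphs stay sparse.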
no code implementations • 28 Oct 2015 • Diana Cai, Nathanael Ackerman, Cameron Freer
Directed graphs occur throughout statistical modeling of networks, and exchangeability is a natural assumption when the ordering of vertices does not matter.
no code implementations • 5 Dec 2014 • Diana Cai, Nathanael Ackerman, Cameron Freer
Exchangeable graphs arise via a sampling procedure from measurable functions known as graphons.
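The sampling procedure mentioned here is standard and short enough to sketch: given a graphon, a symmetric measurable function W from the unit square to [0, 1], assign each vertex an independent uniform latent value and connect each pair independently with probability given by W at their latent values.

```python
import random

def sample_graphon(n, W, seed=0):
    """Sample an n-vertex exchangeable graph from a graphon W.

    Draw U_i ~ Uniform[0, 1] for each vertex, then include edge (i, j)
    independently with probability W(U_i, U_j).
    """
    rng = random.Random(seed)
    U = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(U[i], U[j]):
                edges.add((i, j))
    return U, edges
```

For instance, the product graphon `W(x, y) = x * y` yields graphs whose expected edge density is 1/4; the resulting graph distribution is invariant to vertex relabeling.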