no code implementations • 18 Apr 2024 • Lingxiao Li, Raaz Dwivedi, Lester Mackey
Modern compression methods can summarize a target distribution $\mathbb{P}$ more succinctly than i.i.d.
no code implementations • 9 Apr 2024 • Jane Dwivedi-Yu, Raaz Dwivedi, Timo Schick
We present an evaluation of several commonly used generative models and a qualitative analysis that indicates a preference for discussing family and hobbies in text about women.
no code implementations • 18 Feb 2024 • Alberto Abadie, Anish Agarwal, Raaz Dwivedi, Abhin Shah
This article introduces a new estimator of average treatment effects under unobserved confounding in modern data-rich environments featuring large numbers of units and outcomes.
1 code implementation • 11 Apr 2023 • Susobhan Ghosh, Raphael Kim, Prasidh Chhabria, Raaz Dwivedi, Predrag Klasnja, Peng Liao, Kelly Zhang, Susan Murphy
We use a working definition of personalization and introduce a resampling-based methodology for investigating whether the personalization exhibited by the RL algorithm is an artifact of the RL algorithm's stochasticity.
1 code implementation • 14 Jan 2023 • Carles Domingo-Enrich, Raaz Dwivedi, Lester Mackey
To address these shortcomings, we introduce Compress Then Test (CTT), a new framework for high-powered kernel testing based on sample compression.
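CTT compresses each sample before testing to avoid the quadratic cost of standard kernel two-sample tests. For context, here is a minimal sketch of the quadratic-time unbiased MMD statistic that such tests build on (this is not CTT itself; the Gaussian kernel and its bandwidth are illustrative choices):

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between the samples X and Y."""
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    n, m = len(X), len(Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))   # off-diagonal mean
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
# Same distribution: statistic near 0. Shifted distribution: clearly positive.
same = mmd2_unbiased(rng.standard_normal((300, 2)), rng.standard_normal((300, 2)))
diff = mmd2_unbiased(rng.standard_normal((300, 2)), rng.standard_normal((300, 2)) + 1.0)
```

Each kernel matrix above costs $\mathcal{O}(n^2)$ to form; compressing the samples first, as CTT does, is what removes that bottleneck.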
no code implementations • 25 Nov 2022 • Raaz Dwivedi, Katherine Tian, Sabina Tomkins, Predrag Klasnja, Susan Murphy, Devavrat Shah
We consider a matrix completion problem with missing data, where the $(i, t)$-th entry, when observed, is given by its mean $f(u_i, v_t)$ plus mean-zero noise for an unknown function $f$ and latent factors $u_i$ and $v_t$.
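This observation model can be simulated directly. In the sketch below, the bilinear choice $f(u, v) = u \cdot v$, the noise level, and the row-nearest-neighbor estimate are illustrative assumptions, not necessarily the paper's setting or method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 200, 200
u = rng.uniform(-1, 1, n)          # latent unit factors u_i
v = rng.uniform(-1, 1, T)          # latent time factors v_t
f = np.outer(u, v)                 # mean matrix f(u_i, v_t) = u_i * v_t (illustrative f)
Y = f + 0.1 * rng.standard_normal((n, T))   # observed entry = mean + mean-zero noise
mask = rng.uniform(size=(n, T)) < 0.5       # entries missing at random

def nn_estimate(Y, mask, i, t, k=10):
    """Estimate the mean of entry (i, t) by averaging column t over the k
    rows closest to row i on their commonly observed entries."""
    common = mask & mask[i]                  # entries observed in both rows
    d = np.full(Y.shape[0], np.inf)
    for j in range(Y.shape[0]):
        if j != i and mask[j, t] and common[j].sum() > 0:
            d[j] = np.mean((Y[j, common[j]] - Y[i, common[j]]) ** 2)
    nbrs = np.argsort(d)[:k]
    return Y[nbrs, t].mean()

est = nn_estimate(Y, mask, 0, 0)   # recovers f[0, 0] up to noise
```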
no code implementations • 14 Nov 2022 • Abhin Shah, Raaz Dwivedi, Devavrat Shah, Gregory W. Wornell
Given an observational study with $n$ independent but heterogeneous units, our goal is to learn the counterfactual distribution for each unit using only one $p$-dimensional sample per unit containing covariates, interventions, and outcomes.
no code implementations • 14 Feb 2022 • Raaz Dwivedi, Katherine Tian, Sabina Tomkins, Predrag Klasnja, Susan Murphy, Devavrat Shah
Our goal is to provide inference guarantees for the counterfactual mean at the smallest possible scale -- mean outcome under different treatments for each unit and each time -- with minimal assumptions on the adaptive treatment policy.
1 code implementation • ICLR 2022 • Abhishek Shetty, Raaz Dwivedi, Lester Mackey
Near-optimal thinning procedures achieve this goal by sampling $n$ points from a Markov chain and identifying $\sqrt{n}$ points with $\widetilde{\mathcal{O}}(1/\sqrt{n})$ discrepancy to $\mathbb{P}$.
1 code implementation • ICLR 2022 • Raaz Dwivedi, Lester Mackey
Fourth, we establish that KT applied to a sum of the target and power kernels (a procedure we call KT+) simultaneously inherits the improved MMD guarantees of power KT and the tighter individual function guarantees of target KT.
1 code implementation • 12 May 2021 • Raaz Dwivedi, Lester Mackey
The maximum discrepancy in integration error is $\mathcal{O}_d(n^{-1/2}\sqrt{\log n})$ in probability for compactly supported $\mathbb{P}$ and $\mathcal{O}_d(n^{-\frac{1}{2}} (\log n)^{(d+1)/2}\sqrt{\log\log n})$ for sub-exponential $\mathbb{P}$ on $\mathbb{R}^d$.
1 code implementation • 23 Aug 2020 • Raaz Dwivedi, Yan Shuo Tan, Briton Park, Mian Wei, Kevin Horgan, David Madigan, Bin Yu
Building on Yu and Kumbier's PCS framework, we introduce a novel methodology for randomized experiments, Stable Discovery of Interpretable Subgroups via Calibration (StaDISC), for identifying subgroups with large heterogeneous treatment effects.
1 code implementation • 17 Jun 2020 • Raaz Dwivedi, Chandan Singh, Bin Yu, Martin J. Wainwright
We provide an extensive theoretical characterization of MDL-COMP for linear models and kernel methods and show that it is not just a function of parameter count, but rather a function of the singular values of the design or the kernel matrix and the signal-to-noise ratio.
no code implementations • 22 May 2020 • Nhat Ho, Koulik Khamaru, Raaz Dwivedi, Martin J. Wainwright, Michael I. Jordan, Bin Yu
Many statistical estimators are defined as the fixed point of a data-dependent operator, with estimators based on minimizing a cost function being an important special case.
1 code implementation • 16 May 2020 • Nick Altieri, Rebecca L. Barter, James Duncan, Raaz Dwivedi, Karl Kumbier, Xiao Li, Robert Netzorg, Briton Park, Chandan Singh, Yan Shuo Tan, Tiffany Tang, Yu Wang, Chao Zhang, Bin Yu
We use this data to develop predictions and corresponding prediction intervals for the short-term trajectory of COVID-19 cumulative death counts at the county level in the United States up to two weeks ahead.
1 code implementation • 29 May 2019 • Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu
This bound gives a precise quantification of the faster convergence of Metropolized HMC relative to simpler MCMC algorithms such as the Metropolized random walk, or Metropolized Langevin algorithm.
no code implementations • 1 Feb 2019 • Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan, Bin Yu
We study a class of weakly identifiable location-scale mixture models for which the maximum likelihood estimates based on $n$ i.i.d.
no code implementations • NeurIPS 2018 • Raaz Dwivedi, Nhật Hồ, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan
We provide two classes of theoretical guarantees: first, we characterize the bias introduced due to the misspecification; and second, we prove that population EM converges at a geometric rate to the model projection under a suitable initialization condition.
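The geometric convergence of EM can be made concrete in the classic symmetric two-component Gaussian mixture, where the update has a closed form. This is a standard textbook example chosen for illustration, not necessarily the misspecified model class studied in the paper:

```python
import numpy as np

def em_symmetric_mixture(x, theta0, n_iters=200):
    """Sample EM for the mixture 0.5*N(theta, 1) + 0.5*N(-theta, 1).

    E-step: posterior weight of the +theta component is (1 + tanh(theta*x)) / 2.
    M-step: the weighted mean collapses to theta <- mean(tanh(theta*x) * x).
    """
    theta = theta0
    for _ in range(n_iters):
        theta = np.mean(np.tanh(theta * x) * x)
    return theta

rng = np.random.default_rng(0)
signs = rng.choice([-1.0, 1.0], size=20000)
x = 2.0 * signs + rng.standard_normal(20000)     # data with true theta* = 2
theta_hat = em_symmetric_mixture(x, theta0=0.5)  # converges near theta* = 2
```

Under a well-separated initialization, each iterate contracts toward the fixed point at a geometric rate, which is the phenomenon the population-EM guarantee formalizes.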
no code implementations • 1 Oct 2018 • Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Michael I. Jordan, Martin J. Wainwright, Bin Yu
A line of recent work has analyzed the behavior of the Expectation-Maximization (EM) algorithm in the well-specified setting, in which the population likelihood is locally strongly concave around its maximizing argument.
1 code implementation • 8 Jan 2018 • Raaz Dwivedi, Yuansi Chen, Martin J. Wainwright, Bin Yu
Relative to known guarantees for the unadjusted Langevin algorithm (ULA), our bounds show that the use of an accept-reject step in MALA leads to an exponentially improved dependence on the error-tolerance.
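The accept-reject step in question is the standard Metropolis-Hastings correction applied to a Langevin proposal. A minimal sketch for a 1-D standard Gaussian target follows (the target, step size, and chain length are illustrative choices; ULA would simply skip the correction and always accept):

```python
import numpy as np

def mala_sample(grad_logpi, logpi, x0, step, n_steps, rng):
    """Metropolis-adjusted Langevin algorithm (MALA): Langevin proposal
    plus a Metropolis-Hastings accept-reject correction."""
    x = x0
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        prop = x + step * grad_logpi(x) + np.sqrt(2 * step) * noise
        # Log proposal densities q(prop | x) and q(x | prop), up to a shared constant.
        fwd = -np.sum((prop - x - step * grad_logpi(x)) ** 2) / (4 * step)
        bwd = -np.sum((x - prop - step * grad_logpi(prop)) ** 2) / (4 * step)
        log_alpha = logpi(prop) - logpi(x) + bwd - fwd
        if np.log(rng.uniform()) < log_alpha:   # ULA would accept unconditionally
            x = prop
        samples.append(x)
    return np.array(samples)

# Example: sample from a standard 1-D Gaussian, log pi(x) = -x^2/2.
rng = np.random.default_rng(0)
chain = mala_sample(lambda x: -x, lambda x: -0.5 * np.sum(x ** 2),
                    np.zeros(1), step=0.5, n_steps=5000, rng=rng)
```

After a short burn-in, the chain's mean and variance match the standard Gaussian target; it is this correction step that yields the exponentially improved dependence on the error tolerance relative to ULA.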
2 code implementations • 23 Oct 2017 • Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu
We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope.