no code implementations • 13 Dec 2023 • Nicolas Garcia Trillos, Bodhisattva Sen
We then prove that, under appropriate identifiability assumptions on the model, our OT-based denoiser can be recovered solely from information of the marginal distribution of $Z$ and the posterior mean of the model, after solving a linear relaxation problem over a suitable space of couplings that is reminiscent of a standard multimarginal OT (MOT) problem.
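The coupling relaxation described above can be illustrated in a toy two-marginal setting: finding a discrete coupling that minimizes a transport cost subject to marginal constraints is an ordinary linear program. The sketch below (using `scipy.optimize.linprog`; the supports, weights, and cost are invented for illustration, and the paper's actual MOT problem couples more than two marginals) shows the structure:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-marginal analogue of the coupling relaxation: find a coupling P
# between discrete marginals mu (on x) and nu (on y) minimizing <C, P>,
# subject to the row/column marginal constraints.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5])
mu = np.array([0.2, 0.5, 0.3])          # marginal weights on x
nu = np.array([0.6, 0.4])               # marginal weights on y
C = (x[:, None] - y[None, :]) ** 2      # squared-distance cost matrix

n, m = C.shape
# Equality constraints on the flattened coupling: row sums = mu, column sums = nu.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0    # row-sum constraint for x_i
for j in range(m):
    A_eq[n + j, j::m] = 1.0             # column-sum constraint for y_j
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
P = res.x.reshape(n, m)                 # optimal coupling matrix
```

Because the supports here are one-dimensional and sorted, the optimal coupling is the monotone (north-west corner) coupling, which routes all mass through cost-0.25 cells.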
no code implementations • 28 Sep 2023 • Sumit Mukherjee, Bodhisattva Sen, Subhabrata Sen
We study empirical Bayes estimation in high-dimensional linear regression.
no code implementations • 10 Jan 2022 • Martin Slawski, Bodhisattva Sen
We study permutation recovery in the permuted regression setting and develop a computationally efficient and easy-to-use algorithm for denoising based on the Kiefer-Wolfowitz nonparametric maximum likelihood estimator.
no code implementations • NeurIPS 2021 • Nabarun Deb, Promit Ghosal, Bodhisattva Sen
We illustrate the usefulness of this stability estimate by first providing rates of convergence for the natural discrete-discrete and semi-discrete estimators of optimal transport maps.
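A discrete-discrete plug-in estimator of the optimal transport map, of the kind analyzed above, can be sketched by matching two equal-size samples through the assignment problem (a standard construction, not the authors' code; `linear_sum_assignment` solves the matching exactly for squared Euclidean cost):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_map_estimate(X, Y):
    """Discrete-discrete plug-in estimate of the OT map: match the n
    source points X to the n target points Y by solving the assignment
    problem with squared Euclidean cost."""
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    _, cols = linear_sum_assignment(cost)   # optimal permutation
    return Y[cols]                          # T_hat(X[i]) = Y[cols[i]]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
Y = X + 3.0   # target is a shifted sample; the true OT map is a translation
T = ot_map_estimate(X, Y)
```

When the target is an exact translation of the source, the identity matching is cyclically monotone and hence optimal, so the estimator recovers the translation on the sample points.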
no code implementations • 3 Jun 2020 • Gil Kur, Fuchang Gao, Adityanand Guntuboyina, Bodhisattva Sen
The least squares estimator (LSE) is shown to be suboptimal in squared error loss in the usual nonparametric regression model with Gaussian errors for $d \geq 5$ for each of the following families of functions: (i) convex functions supported on a polytope (in fixed design), (ii) bounded convex functions supported on a polytope (in random design), and (iii) convex Lipschitz functions supported on any convex domain (in random design).
1 code implementation • 14 May 2019 • Promit Ghosal, Bodhisattva Sen
Under mild structural assumptions, we provide global and local rates of convergence of the empirical quantile and rank maps.
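The empirical rank map in this optimal-transport sense can be sketched by optimally assigning the sample to a fixed set of reference points in the unit cube (the regular grid and sample below are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_rank_map(X, U):
    """Empirical multivariate rank map in the OT sense: assign each
    sample point in X to one reference point in U by minimizing the
    total squared Euclidean cost over all one-to-one matchings."""
    cost = ((X[:, None, :] - U[None, :, :]) ** 2).sum(axis=-1)
    _, cols = linear_sum_assignment(cost)
    return U[cols]                          # R_hat(X[i]) = U[cols[i]]

# Reference measure: a regular 10x10 grid in the unit square (100 points).
g = (np.arange(10) + 0.5) / 10
U = np.array([(a, b) for a in g for b in g])
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
R = empirical_rank_map(X, U)
```

Each grid point is used exactly once, so the empirical ranks are a permutation of the reference set, mirroring the distribution-freeness of classical one-dimensional ranks.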
Statistics Theory; Probability (MSC: 62G30, 62G20, 60F15, 35J96)
no code implementations • 4 Mar 2019 • Billy Fang, Adityanand Guntuboyina, Bodhisattva Sen
We show that the finite sample risk of these LSEs is always bounded from above by $n^{-2/3}$ modulo logarithmic factors depending on $d$; thus these nonparametric LSEs avoid the curse of dimensionality to some extent.
1 code implementation • 18 Oct 2018 • Nabarun Deb, Sujayam Saha, Adityanand Guntuboyina, Bodhisattva Sen
We propose a tuning-parameter-free nonparametric maximum likelihood approach, implementable via the EM algorithm, to estimate the unknown parameters.
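The EM approach to a grid-based nonparametric MLE can be sketched for the simplest case, a Gaussian location mixture $Z \sim N(\theta, 1)$ with $\theta \sim G$: fix candidate atoms on a grid and iterate weight updates. (This is a hedged one-dimensional sketch under those assumptions, not the authors' multivariate implementation; the grid and data below are invented.)

```python
import numpy as np

def npmle_em(z, grid, n_iter=200):
    """Grid-based nonparametric MLE of the mixing distribution G for
    Z ~ N(theta, 1), theta ~ G: fix atoms at `grid` and run EM on the
    mixing weights only."""
    w = np.full(len(grid), 1.0 / len(grid))                  # uniform start
    lik = np.exp(-0.5 * (z[:, None] - grid[None, :]) ** 2)   # N(g, 1) densities (common factor dropped)
    for _ in range(n_iter):
        post = lik * w                                       # E-step: responsibilities
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                                # M-step: reweight atoms
    return w

rng = np.random.default_rng(2)
theta = rng.choice([-2.0, 2.0], size=500)   # true mixing distribution: two atoms
z = theta + rng.normal(size=500)
grid = np.linspace(-4, 4, 81)
w = npmle_em(z, grid)
```

Each iteration increases the mixture likelihood, and the estimated weights concentrate near the true atoms at $\pm 2$ without any tuning parameter beyond the grid itself.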
Methodology
no code implementations • 17 Sep 2017 • Adityanand Guntuboyina, Bodhisattva Sen
We consider the problem of nonparametric regression under shape constraints.
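The canonical example of shape-constrained regression is isotonic regression, whose least squares estimator is computed exactly by the Pool Adjacent Violators Algorithm. A textbook sketch (not code from the paper):

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators Algorithm: least-squares projection of y
    onto the cone of nondecreasing sequences (the isotonic LSE)."""
    y = np.asarray(y, dtype=float)
    vals, wts = [], []                  # block means and block sizes
    for v in y:
        vals.append(float(v))
        wts.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            vals[-2:] = [m]
            wts[-2:] = [w]
    return np.repeat(vals, wts)         # expand block means back out

fit = pava([1.0, 3.0, 2.0, 4.0])
```

Here the violating pair (3, 2) is pooled into its mean 2.5, giving the nondecreasing fit (1, 2.5, 2.5, 4); no smoothing bandwidth or penalty parameter is needed, which is a hallmark of shape-constrained estimators.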