no code implementations • ICML 2020 • Shubhanshu Shekhar, Tara Javidi, Mohammad Ghavamzadeh
We consider the problem of allocating a fixed budget of samples to a finite set of discrete distributions to learn them uniformly well (minimizing the maximum error) in terms of four common distance measures: $\ell_2^2$, $\ell_1$, $f$-divergence, and separation distance.
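As a rough illustration of the four distance measures named above (this is a minimal sketch of the metrics themselves, not the paper's sample-allocation algorithm; all function names are ours), the distances between two discrete distributions `p` and `q` can be computed as:

```python
import numpy as np

def l2_squared(p, q):
    """Squared ell_2 distance between two discrete distributions."""
    return float(np.sum((p - q) ** 2))

def l1_distance(p, q):
    """ell_1 distance (twice the total-variation distance)."""
    return float(np.sum(np.abs(p - q)))

def f_divergence(p, q, f):
    """D_f(p || q) = sum_i q_i * f(p_i / q_i); assumes q_i > 0 everywhere."""
    return float(np.sum(q * f(p / q)))

def separation(p, q):
    """Separation distance: max_i (1 - p_i / q_i)."""
    return float(np.max(1.0 - p / q))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
# KL divergence is the f-divergence with f(t) = t * log(t)
kl = f_divergence(p, q, lambda t: t * np.log(t))
```

With these, "learning a distribution well" means driving the chosen distance between the empirical and true distribution down as samples accumulate.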
no code implementations • 30 Oct 2023 • Teodora Pandeva, Patrick Forré, Aaditya Ramdas, Shubhanshu Shekhar
We propose a general framework for constructing powerful, sequential hypothesis tests for a large class of nonparametric testing problems.
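A toy illustration of the machinery behind such sequential tests is a test martingale combined with Ville's inequality (a standard construction; this is an assumed minimal example, not the framework proposed in the paper). Here we sequentially test whether a binary stream has success probability $p_0$:

```python
import numpy as np

def sequential_betting_test(xs, p0=0.5, lam=1.0, alpha=0.05):
    """Toy test martingale for H0: P(X=1) = p0, with X in {0, 1}.
    The wealth process has expectation 1 under H0, so by Ville's
    inequality, rejecting when wealth >= 1/alpha keeps the type-I
    error below alpha at any data-dependent stopping time."""
    wealth = 1.0
    for t, x in enumerate(xs, start=1):
        wealth *= 1.0 + lam * (x - p0)  # fair bet: E[1 + lam*(X - p0)] = 1 under H0
        if wealth >= 1.0 / alpha:
            return t  # reject H0 sequentially at time t
    return None  # never rejected
```

On a stream of all ones the wealth grows geometrically and the test rejects quickly; on a balanced stream the wealth shrinks and no rejection occurs.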
no code implementations • 2 Oct 2023 • Shubhanshu Shekhar, Aaditya Ramdas
Constructing nonasymptotic confidence intervals (CIs) for the mean of a univariate distribution from independent and identically distributed (i.i.d.) observations.
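For the bounded-support special case, the classical nonasymptotic baseline is the Hoeffding CI (a sketch under that assumption; more refined constructions, such as betting-based CIs, tighten it considerably):

```python
import numpy as np

def hoeffding_ci(x, alpha=0.05, lo=0.0, hi=1.0):
    """Nonasymptotic (1 - alpha) CI for the mean of i.i.d. data
    supported on [lo, hi], via Hoeffding's inequality."""
    x = np.asarray(x, dtype=float)
    half_width = (hi - lo) * np.sqrt(np.log(2.0 / alpha) / (2.0 * len(x)))
    m = float(np.mean(x))
    return max(lo, m - half_width), min(hi, m + half_width)
```

The half-width shrinks at the familiar $O(\sqrt{\log(1/\alpha)/n})$ rate, and the guarantee holds at every finite sample size, with no asymptotic approximation.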
no code implementations • 16 Sep 2023 • Shubhanshu Shekhar, Aaditya Ramdas
We consider the problem of sequential change detection, where the goal is to design a scheme that detects any change in a parameter or functional $\theta$ of the data-stream distribution with small detection delay, while controlling the frequency of false alarms when no change has occurred.
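The classical instance of this delay-versus-false-alarm trade-off is the CUSUM procedure for a known Gaussian mean shift (a standard baseline sketch, not the scheme developed in the paper):

```python
import numpy as np

def cusum(stream, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Classical CUSUM for a known Gaussian mean shift mu0 -> mu1.
    Raising `threshold` lowers the false-alarm frequency at the
    price of a longer detection delay."""
    s = 0.0
    for t, x in enumerate(stream, start=1):
        # log-likelihood ratio of post-change vs pre-change observation
        llr = ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2.0)
        s = max(0.0, s + llr)
        if s > threshold:
            return t  # raise the alarm at time t
    return None

# simulated stream: change from N(0,1) to N(1,1) at time 100
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
alarm_time = cusum(stream)
```

Under the null the statistic drifts downward and stays near zero; after the change it climbs roughly linearly, so the expected delay scales as threshold over the per-sample KL divergence.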
no code implementations • 8 May 2023 • Shubhanshu Shekhar, Ziyu Xu, Zachary C. Lipton, Pierre J. Liang, Aaditya Ramdas
Next, we develop methods to improve the quality of CSs by incorporating side information about the unknown values associated with each item.
no code implementations • 6 Feb 2023 • Shubhanshu Shekhar, Aaditya Ramdas
We present a simple reduction from sequential estimation to sequential changepoint detection (SCD).
no code implementations • 18 Dec 2022 • Shubhanshu Shekhar, Ilmun Kim, Aaditya Ramdas
In nonparametric independence testing, we observe i.i.d. data $\{(X_i, Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ lie in general spaces, and we wish to test the null hypothesis that $X$ is independent of $Y$.
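A generic way to calibrate any such test statistic is a permutation test (a minimal sketch of the standard device, not the method of the paper; the statistic here is plain absolute correlation for concreteness):

```python
import numpy as np

def perm_independence_pvalue(x, y, stat, n_perm=200, seed=0):
    """Permutation test of independence: permuting y breaks any
    dependence between the pairs, so the permuted statistics form
    an exact finite-sample null distribution for the observed one."""
    rng = np.random.default_rng(seed)
    obs = stat(x, y)
    null_stats = [stat(x, rng.permutation(y)) for _ in range(n_perm)]
    # add-one correction keeps the p-value valid at finite n_perm
    return (1 + sum(s >= obs for s in null_stats)) / (1 + n_perm)

abs_corr = lambda x, y: abs(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y_dep = x + 0.1 * rng.normal(size=200)  # strongly dependent pair
p_dep = perm_independence_pvalue(x, y_dep, abs_corr)
```

The same wrapper works with any statistic, including kernel-based ones, at the cost of `n_perm` recomputations.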
no code implementations • 27 Nov 2022 • Shubhanshu Shekhar, Ilmun Kim, Aaditya Ramdas
The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution.
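For reference, the degenerate statistic in question is the unbiased quadratic-time kernel-MMD$^2$ U-statistic (a sketch of the usual estimator with a Gaussian kernel on 1-D data; names and bandwidth are ours):

```python
import numpy as np

def rbf_gram(a, b, bandwidth=1.0):
    """Gaussian-kernel Gram matrix for 1-D samples."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth**2))

def mmd2_ustat(x, y, bandwidth=1.0):
    """Unbiased quadratic-time kernel-MMD^2 U-statistic. Under the
    null (same distribution) it is degenerate, which is why its
    limiting distribution is intractable."""
    m, n = len(x), len(y)
    kxx = rbf_gram(x, x, bandwidth)
    kyy = rbf_gram(y, y, bandwidth)
    kxy = rbf_gram(x, y, bandwidth)
    # drop diagonal terms for unbiasedness
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 300)
y_same = rng.normal(0.0, 1.0, 300)  # same distribution as x
y_diff = rng.normal(2.0, 1.0, 300)  # shifted distribution
```

Under the alternative the statistic concentrates around the positive population MMD$^2$; under the null it hovers near zero.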
no code implementations • 12 Mar 2022 • Shubhanshu Shekhar, Tara Javidi
We study the kernelized bandit problem, which involves designing an adaptive strategy for querying a noisy zeroth-order oracle to efficiently learn about the optimizer of an unknown function $f$ with norm bounded by $M<\infty$ in a Reproducing Kernel Hilbert Space (RKHS) associated with a positive definite kernel $K$.
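A standard baseline strategy for this setting is GP-UCB: fit a Gaussian-process surrogate to the noisy queries and query where the optimistic upper bound is largest (a minimal 1-D sketch under an RBF kernel; this is the generic baseline, not the authors' algorithm):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """RBF kernel matrix for 1-D inputs, with prior variance 1."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * lengthscale**2))

def gp_posterior(x_obs, y_obs, x_query, noise=0.1):
    """GP posterior mean and variance from noisy zeroth-order queries."""
    K = rbf(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    k_star = rbf(x_obs, x_query)
    sol = np.linalg.solve(K, k_star)
    mean = sol.T @ y_obs
    var = 1.0 - np.sum(k_star * sol, axis=0)  # prior variance rbf(x, x) = 1
    return mean, np.maximum(var, 0.0)

def ucb_choice(x_obs, y_obs, candidates, beta=2.0, noise=0.1):
    """Query the candidate maximizing mean + beta * std (optimism)."""
    mean, var = gp_posterior(x_obs, y_obs, candidates, noise)
    return candidates[np.argmax(mean + beta * np.sqrt(var))]
```

After a single observation at `x = 0`, the posterior variance is small near 0 and near the prior far away, so UCB is drawn to unexplored regions.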
no code implementations • NeurIPS 2021 • Shubhanshu Shekhar, Greg Fields, Mohammad Ghavamzadeh, Tara Javidi
Machine learning models trained on uncurated datasets can often end up adversely affecting inputs belonging to underrepresented groups.
no code implementations • 11 May 2020 • Shubhanshu Shekhar, Tara Javidi
We aim to optimize a black-box function $f:\mathcal{X} \mapsto \mathbb{R}$ under the assumption that $f$ is H\"older smooth and has bounded norm in the RKHS associated with a given kernel $K$.
no code implementations • 6 Mar 2020 • Jean Tarbouriech, Shubhanshu Shekhar, Matteo Pirotta, Mohammad Ghavamzadeh, Alessandro Lazaric
Using a number of simple domains with heterogeneous noise in their transitions, we show that our heuristic-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime, while achieving asymptotic performance similar to that of the original algorithm.
no code implementations • 28 Oct 2019 • Shubhanshu Shekhar, Tara Javidi, Mohammad Ghavamzadeh
We consider the problem of allocating samples to a finite set of discrete distributions in order to learn them uniformly well in terms of four common distance measures: $\ell_2^2$, $\ell_1$, $f$-divergence, and separation distance.
no code implementations • 1 Jun 2019 • Shubhanshu Shekhar, Mohammad Ghavamzadeh, Tara Javidi
We construct and analyze active learning algorithms for the problem of binary classification with abstention.
no code implementations • 23 May 2019 • Shubhanshu Shekhar, Mohammad Ghavamzadeh, Tara Javidi
We then propose a plug-in classifier that employs unlabeled samples to decide the region of abstention, and derive an upper bound on the excess risk of our classifier under standard \emph{H\"older smoothness} and \emph{margin} assumptions.
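The simplest plug-in rule of this kind is Chow-style abstention: predict the likelier label unless the estimated regression function is too close to 1/2 (a generic sketch of the classical rule; the paper's classifier additionally uses unlabeled data to shape the abstention region):

```python
import numpy as np

def chow_plugin(eta_hat, abstain_cost=0.3):
    """Chow-type plug-in rule, given an estimate eta_hat of
    P(Y=1 | x). Abstain (output -1) whenever eta_hat lies within
    1/2 - abstain_cost of 1/2; otherwise predict the likelier
    label. Requires abstain_cost < 1/2, or abstention never pays."""
    eta_hat = np.asarray(eta_hat, dtype=float)
    preds = (eta_hat >= 0.5).astype(int)
    abstain = np.abs(eta_hat - 0.5) < (0.5 - abstain_cost)
    return np.where(abstain, -1, preds)
```

With `abstain_cost=0.3`, points whose estimated conditional probability is within 0.2 of 1/2 are rejected rather than classified.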
no code implementations • 26 Feb 2019 • Shubhanshu Shekhar, Tara Javidi
This paper considers the problem of estimating the level set of a black-box function from noisy and expensive evaluation queries.
no code implementations • 5 Dec 2017 • Shubhanshu Shekhar, Tara Javidi
This paper studies the problem of maximizing a black-box function $f:\mathcal{X} \to \mathbb{R}$ in a Bayesian framework with a Gaussian Process (GP) prior.
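In this GP-prior setting, queries are typically chosen by maximizing an acquisition function over the posterior; a common closed-form choice is expected improvement (a sketch of the standard acquisition, not necessarily the strategy analyzed in the paper):

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mean, std, best):
    """Expected improvement for maximization, given a Gaussian
    posterior N(mean, std^2) at a candidate point and the best
    value observed so far: E[max(f - best, 0)] in closed form."""
    z = (mean - best) / std
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal pdf
    return (mean - best) * Phi + std * phi
```

EI grows with both the posterior mean (exploitation) and the posterior standard deviation (exploration), which is what makes it a natural GP acquisition rule.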