no code implementations • 16 Feb 2024 • Yuyang Deng, Mingda Qiao
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions while minimizing the total number of samples drawn from them.
no code implementations • 12 Feb 2024 • Mingda Qiao, Letian Zheng
We then show that an $O(\sqrt{T})$ lower calibration distance can be achieved via a simple minimax argument and a reduction to online learning on a Lipschitz class.
no code implementations • 28 Nov 2023 • Weihao Kong, Mingda Qiao, Rajat Sen
We study the problem of recovering Gaussian data under adversarial corruptions, when the noise is low-rank and the corruptions occur at the coordinate level.
no code implementations • 5 Oct 2022 • Mingda Qiao, Guru Guruganesh, Ankit Singh Rawat, Avinava Dubey, Manzil Zaheer
Regev and Vijayaraghavan (2017) showed that with $\Delta = \Omega(\sqrt{\log k})$ separation, the means can be learned using $\mathrm{poly}(k, d)$ samples, whereas super-polynomially many samples are required if $\Delta = o(\sqrt{\log k})$ and $d = \Omega(\log k)$.
no code implementations • 29 Jun 2022 • Guy Blanc, Jane Lange, Mingda Qiao, Li-Yang Tan
The previous fastest algorithm for this problem ran in $n^{O(\log n)}$ time, a consequence of the classic algorithm of Ehrenfeucht and Haussler (1989) for the distribution-free setting.
no code implementations • 1 Sep 2021 • Guy Blanc, Jane Lange, Mingda Qiao, Li-Yang Tan
We give an $n^{O(\log\log n)}$-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over $\{\pm 1\}^n$.
no code implementations • 2 Jul 2021 • Guy Blanc, Jane Lange, Mingda Qiao, Li-Yang Tan
Greedy decision tree learning heuristics are mainstays of machine learning practice, but theoretical justification for their empirical success remains elusive.
no code implementations • 29 Jun 2021 • Mingda Qiao, Gregory Valiant
We study the selective learning problem introduced by Qiao and Valiant (2019), in which the learner observes $n$ labeled data points one at a time.
no code implementations • 7 Dec 2020 • Mingda Qiao, Gregory Valiant
In this paper, we prove an $\Omega(T^{0.528})$ bound on the calibration error, which, to the best of our knowledge, is the first super-$\sqrt{T}$ lower bound for this setting.
no code implementations • 12 Feb 2019 • Mingda Qiao, Gregory Valiant
The algorithm is allowed to choose when to make the prediction as well as the length of the prediction window, possibly depending on the observations so far.
no code implementations • ICLR 2020 • Jian Li, Xuanyuan Luo, Mingda Qiao
We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds.
no code implementations • ICML 2018 • Mingda Qiao
We consider the problem of learning a binary classifier from $n$ different data sources, among which at most an $\eta$ fraction are adversarial.
no code implementations • NeurIPS 2017 • Avrim Blum, Nika Haghtalab, Ariel D. Procaccia, Mingda Qiao
We introduce a collaborative PAC learning model, in which $k$ players attempt to learn the same underlying concept.
no code implementations • 22 Nov 2017 • Mingda Qiao, Gregory Valiant
Specifically, we consider the setting where there is some underlying distribution, $p$, and each data source provides a batch of $\ge k$ samples, with the guarantee that at least a $(1-\epsilon)$ fraction of the sources draw their samples from a distribution with total variation distance at most $\eta$ from $p$.
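The guarantee above is stated in terms of total variation distance between discrete distributions. As a minimal sketch of the quantity being bounded (the function name and toy values are illustrative, not from the paper):

```python
def total_variation(p, q):
    """TV distance between two discrete distributions given as probability vectors:
    half the L1 distance between them."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# A "good" source draws from some q with total_variation(p, q) <= eta.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(total_variation(p, q))  # 0.1 (up to floating-point error)
```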
no code implementations • 4 Jun 2017 • Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao, Ruosong Wang
We provide a novel instance-wise lower bound for the sample complexity of the problem, as well as a nontrivial sampling algorithm, matching the lower bound up to a factor of $\ln|\mathcal{F}|$.
no code implementations • 19 May 2017 • Haotian Jiang, Jian Li, Mingda Qiao
In the Best-$K$ identification problem (Best-$K$-Arm), we are given $N$ stochastic bandit arms with unknown reward distributions.
no code implementations • 13 Feb 2017 • Lijie Chen, Jian Li, Mingda Qiao
In the Best-$k$-Arm problem, we are given $n$ stochastic bandit arms, each associated with an unknown reward distribution.
no code implementations • 22 Aug 2016 • Lijie Chen, Jian Li, Mingda Qiao
$H(I)=\sum_{i=2}^n\Delta_{[i]}^{-2}$ is the complexity of the instance.
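The complexity measure $H(I)$ can be computed directly from the arm means. A minimal sketch, assuming the standard best-arm-identification convention that $\Delta_{[i]}$ denotes the gap between the largest mean and the $i$-th largest mean (this convention is not spelled out in the excerpt above):

```python
def instance_complexity(means):
    """H(I) = sum_{i=2}^n Delta_[i]^{-2}, where Delta_[i] is the gap between
    the largest mean and the i-th largest mean (assumed convention)."""
    mu = sorted(means, reverse=True)
    best = mu[0]
    return sum((best - m) ** -2 for m in mu[1:])

# Gaps 0.1 and 0.4 give 1/0.01 + 1/0.16 = 106.25.
print(instance_complexity([0.9, 0.8, 0.5]))
```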