no code implementations • NeurIPS 2018 • Kevin G. Jamieson, Lalit Jain
We propose a new adaptive sampling approach to multiple testing which aims to maximize statistical power while ensuring anytime false discovery control.
no code implementations • NeurIPS 2016 • Kevin G. Jamieson, Daniel Haas, Benjamin Recht
This paper studies the trade-off between two different kinds of pure exploration: breadth versus depth.
no code implementations • NeurIPS 2015 • Kevin G. Jamieson, Lalit Jain, Chris Fernandez, Nicholas J. Glattard, Rob Nowak
Active learning methods automatically adapt data collection by selecting the most informative samples in order to accelerate machine learning.
no code implementations • NeurIPS 2012 • Kevin G. Jamieson, Robert Nowak, Ben Recht
Moreover, if the function evaluations are noisy, then approximating gradients by finite differences is difficult.
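The difficulty mentioned here can be seen in a small sketch: a central finite-difference estimate of a derivative is exact for a quadratic, but if each function evaluation carries noise of size ε, the estimate is perturbed by up to ε/h, which blows up as the step h shrinks. The objective `f(x) = x²` and the noise model below are illustrative assumptions, not from the paper.

```python
import random

def f(x, noise=0.0):
    # Hypothetical smooth objective f(x) = x^2; each evaluation is
    # corrupted by additive noise of magnitude `noise` (an assumption
    # for illustration, not the paper's model).
    return x * x + random.uniform(-noise, noise)

def fd_gradient(x, h, noise=0.0):
    # Central finite-difference estimate of f'(x). Noise of size eps in
    # each evaluation can perturb this estimate by up to eps / h, so a
    # small step h amplifies the noise.
    return (f(x + h, noise) - f(x - h, noise)) / (2 * h)

# Noiseless evaluations: the estimate recovers f'(1) = 2 exactly.
exact = fd_gradient(1.0, 1e-4)

# Noisy evaluations with the same small step: the error can be as large
# as noise / h = 10, swamping the true gradient.
noisy = fd_gradient(1.0, 1e-4, noise=1e-3)
```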
no code implementations • NeurIPS 2011 • Kevin G. Jamieson, Robert D. Nowak
We show that under this assumption the number of possible rankings grows like $n^{2d}$ and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than $d \log n$ adaptively selected pairwise comparisons, on average.