
Pure Exploration with Structured Preference Feedback

We consider the problem of pure exploration with subset-wise preference feedback, in which a learner faces $N$ arms, each associated with a feature vector. The learner is allowed to query subsets of size $K$ and receives feedback in the form of a noisy winner. The goal of the learner is to identify the best arm using as few queries as possible. This setting is relevant in various online decision-making scenarios involving human feedback, such as online retailing, streaming services, news feeds, and online advertising, since it is easier and more reliable for people to choose a preferred item from a subset than to assign a likability score to an item in isolation. To the best of our knowledge, this is the first work to consider the subset-wise preference feedback model in a structured setting, which allows for a potentially infinite set of arms. We present two algorithms that guarantee identification of the best arm in $\tilde{O}\left(\frac{d^2}{K \Delta^2}\right)$ samples with probability at least $1 - \delta$, where $d$ is the dimension of the arm features and $\Delta$ is an appropriate notion of utility gap among the arms. We also derive an instance-dependent lower bound of $\Omega\left(\frac{d}{\Delta^2} \log \frac{1}{\delta}\right)$, which matches our upper bound on a worst-case instance. Finally, we run extensive experiments to corroborate our theoretical findings, and observe that our adaptive algorithm stops early and requires up to 12x fewer samples than a non-adaptive algorithm.
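For concreteness, below is a minimal sketch of the feedback model described in the abstract: $N$ arms with $d$-dimensional features, a latent utility vector, and size-$K$ subset queries answered by a noisy winner. The multinomial-logit (Plackett-Luce) winner probabilities, the names `SubsetPreferenceEnv` and `theta_star`, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class SubsetPreferenceEnv:
    """Toy environment for subset-wise preference feedback.

    Each arm i has a feature vector x_i in R^d and latent utility
    <x_i, theta_star>. A query of a size-K subset returns a noisy winner;
    here the winner is drawn from a multinomial-logit (Plackett-Luce)
    model over the subset, which is an assumption made for illustration.
    """

    def __init__(self, features: np.ndarray, theta_star: np.ndarray, rng=None):
        self.features = features            # shape (N, d)
        self.theta_star = theta_star        # shape (d,)
        self.utilities = features @ theta_star
        self.rng = rng or np.random.default_rng()

    def query(self, subset: np.ndarray) -> int:
        """Return the position (within `subset`) of the noisy winner."""
        u = self.utilities[subset]
        p = np.exp(u - u.max())             # softmax over subset utilities
        p /= p.sum()
        return int(self.rng.choice(len(subset), p=p))


# Usage: a naive non-adaptive baseline that queries random size-K subsets
# and scores arms by empirical win counts. The paper's algorithms are
# adaptive and more sample-efficient; this only serves as a reference point.
rng = np.random.default_rng(0)
N, d, K = 50, 5, 4
X = rng.normal(size=(N, d))
env = SubsetPreferenceEnv(X, theta_star=rng.normal(size=d), rng=rng)

wins = np.zeros(N)
for _ in range(5000):
    subset = rng.choice(N, size=K, replace=False)
    wins[subset[env.query(subset)]] += 1

print("estimated best arm:", wins.argmax(), "true best arm:", env.utilities.argmax())
```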
