Contextual Combinatorial Volatile Multi-armed Bandit with Adaptive Discretization

28 Aug 2020 · Andi Nika, Sepehr Elahi, Cem Tekin

We consider the contextual combinatorial volatile multi-armed bandit (CCV-MAB) problem, in which at each round the learner observes a set of available base arms and their contexts, and then selects a super arm containing $K$ base arms in order to maximize its cumulative reward. We work in the semi-bandit feedback setting and assume that the contexts lie in a space ${\cal X}$ endowed with the Euclidean norm, that the expected base arm outcomes are Lipschitz continuous in the contexts, and that the expected super arm rewards are Lipschitz continuous in the expected base arm outcomes. Under these assumptions, we propose an algorithm called Adaptive Contextual Combinatorial Upper Confidence Bound (ACC-UCB). The algorithm adaptively discretizes ${\cal X}$ to form estimates of base arm outcomes and uses an $\alpha$-approximation oracle as a subroutine to select a super arm in each round. It achieves $\tilde{O}(T^{(\bar{D}+1)/(\bar{D}+2)+\epsilon})$ regret for any $\epsilon>0$, where $\bar{D}$ is the approximate optimality dimension of ${\cal X}$, a quantity that captures both the benignness of the base arm arrivals and the structure of the expected reward. In addition, we provide a recipe for obtaining tighter regret bounds by taking the volatility of the base arms into account, and we show that ACC-UCB achieves significant performance gains over the state-of-the-art for worker selection in mobile crowdsourcing.
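Since this page carries only the abstract, the following is a minimal, hypothetical sketch of the adaptive-discretization idea described above: optimistic UCB-style indices maintained over a tree of context regions, with a simple greedy rule standing in for the $\alpha$-approximation oracle. The names (`Node`, `greedy_oracle`), the splitting rule, and the simulated outcome model are illustrative assumptions, not the paper's exact ACC-UCB specification.

```python
import math
import random

class Node:
    """An active region of a 1-D context space, kept by adaptive discretization."""
    def __init__(self, center, radius):
        self.center, self.radius = center, radius
        self.count, self.mean = 0, 0.0  # plays and empirical mean in this region

    def contains(self, x):
        return abs(x - self.center) <= self.radius

    def index(self, t, lip=1.0):
        """Optimistic index: mean + confidence width + Lipschitz discretization bias."""
        if self.count == 0:
            return float("inf")  # force exploration of unplayed regions
        return self.mean + math.sqrt(2 * math.log(t) / self.count) + lip * self.radius

    def update(self, outcome):
        self.count += 1
        self.mean += (outcome - self.mean) / self.count

def greedy_oracle(scores, K):
    """Stand-in for the alpha-approximation oracle: take the K highest indices."""
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:K]

def true_outcome(x):
    """Unknown Lipschitz expected outcome (for simulation only; an assumption)."""
    return 0.5 + 0.4 * math.sin(3 * x)

rng = random.Random(0)
tree = [Node(0.5, 0.5)]  # one region covering the whole context space [0, 1]
K, T = 3, 2000

for t in range(1, T + 1):
    # Volatile arrivals: a fresh, random set of available base arm contexts.
    contexts = [rng.random() for _ in range(10)]
    regions = [next(n for n in tree if n.contains(x)) for x in contexts]
    scores = [r.index(t) for r in regions]
    super_arm = greedy_oracle(scores, K)   # select K base arms
    for i in super_arm:                    # semi-bandit feedback: one outcome per arm
        y = true_outcome(contexts[i]) + rng.uniform(-0.1, 0.1)
        regions[i].update(y)
    # Refine: split a region once it has been played about 1/radius^2 times,
    # so well-explored areas of the context space get finer estimates.
    for n in list(tree):
        if n.radius > 1e-3 and n.count >= 1.0 / n.radius ** 2:
            tree.remove(n)
            tree += [Node(n.center - n.radius / 2, n.radius / 2),
                     Node(n.center + n.radius / 2, n.radius / 2)]

print(f"active regions after {T} rounds: {len(tree)}")
```

The key design point this sketch illustrates is that the discretization is data-adaptive: regions are split only where arms actually arrive and are played, so the partition becomes fine near promising contexts while staying coarse elsewhere, which is what allows the regret to scale with the approximate optimality dimension $\bar{D}$ rather than the ambient dimension of ${\cal X}$.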
