Generalizing Complex Hypotheses on Product Distributions: Auctions, Prophet Inequalities, and Pandora's Problem

27 Nov 2019  ·  Chenghao Guo, Zhiyi Huang, Zhihao Gavin Tang, Xinzhi Zhang

This paper explores a theory of generalization for learning problems on product distributions, complementing existing learning theories in that it does not rely on any complexity measure of the hypothesis class. The main contributions are two general sample complexity bounds: (1) $\tilde{O} \big( \frac{nk}{\epsilon^2} \big)$ samples are necessary and sufficient for learning an $\epsilon$-optimal hypothesis in any problem on an $n$-dimensional product distribution whose marginals have finite supports of sizes at most $k$; (2) $\tilde{O} \big( \frac{n}{\epsilon^2} \big)$ samples are necessary and sufficient for any problem on $n$-dimensional product distributions that satisfies a notion of strong monotonicity from the algorithmic game theory literature. As applications of this theory, we match the optimal sample complexity for single-parameter revenue maximization (Guo et al., STOC 2019), improve the state of the art for multi-parameter revenue maximization (Gonczarowski and Weinberg, FOCS 2018) and the prophet inequality (Correa et al., EC 2019), and provide the first sample complexity bound for Pandora's problem, which is moreover tight.
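To make the learning-from-samples setting concrete, here is a minimal sketch of sample-based empirical revenue maximization for a single bidder: given i.i.d. value samples, pick the reserve price (among observed values) that maximizes empirical revenue. This is an illustration of the kind of problem the sample complexity bounds above apply to, not the paper's algorithm; the function name and setup are hypothetical.

```python
def empirical_reserve(samples):
    """Return the reserve price among the observed sample values that
    maximizes empirical revenue, price * P[value >= price], together
    with that empirical revenue. Illustrative sketch only."""
    m = len(samples)
    best_price, best_revenue = 0.0, 0.0
    # Only observed values need to be considered as candidate prices:
    # between two consecutive sample values, revenue is maximized at
    # the larger one.
    for p in sorted(set(samples)):
        revenue = p * sum(1 for v in samples if v >= p) / m
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price, best_revenue
```

For example, on the samples `[1, 5, 5, 5]` the rule prefers price 5 (empirical revenue 5 · 3/4 = 3.75) over price 1 (revenue 1). The bounds in the abstract quantify how many such samples suffice for the empirically optimal hypothesis to be $\epsilon$-optimal on the true distribution.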
