no code implementations • 24 Dec 2020 • Sander Borst, Daniel Dadush, Neil Olver, Makrand Sinha
In this paper, we return to majorizing measures as a primary object of study, and give a viewpoint that we think is natural and clarifying from an optimization perspective.
Gaussian Processes · Probability · Data Structures and Algorithms · Optimization and Control · 60G15, 68Q87 · G.3
no code implementations • 15 Dec 2020 • Sander Borst, Daniel Dadush, Sophie Huiberts, Samarth Tiwari
For a binary integer program (IP) ${\rm max} ~ c^\mathsf{T} x, Ax \leq b, x \in \{0, 1\}^n$, where $A \in \mathbb{R}^{m \times n}$ and $c \in \mathbb{R}^n$ have independent Gaussian entries and the negative coordinates of the right-hand side $b \in \mathbb{R}^m$ have $\ell_2$ norm at most $n/10$, we prove that the gap between the value of the linear programming relaxation and that of the IP is at most $\operatorname{poly}(m)(\log n)^2 / n$ with probability at least $1-2/n^7-2^{-\operatorname{poly}(m)}$.
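The LP-to-IP gap studied here can be illustrated in the simplest special case $m = 1$ with nonnegative data (an assumption made for this sketch, not the paper's Gaussian model): the LP relaxation becomes a fractional knapsack, solvable greedily, while the binary IP is a 0/1 knapsack, solvable by brute force for small $n$. A minimal sketch:

```python
import itertools
import random

random.seed(0)

# Illustrative m = 1 instance with nonnegative weights and values
# (a simplifying assumption; the paper treats general Gaussian data):
#   LP:  max c^T x  s.t.  a^T x <= b,  x in [0, 1]^n   (fractional knapsack)
#   IP:  same, but x in {0, 1}^n                        (0/1 knapsack)
n = 12
a = [abs(random.gauss(0, 1)) for _ in range(n)]  # constraint row
c = [abs(random.gauss(0, 1)) for _ in range(n)]  # objective
b = sum(a) / 3  # capacity: roughly a third of the total weight

def lp_value(a, c, b):
    """LP optimum: fill by decreasing value/weight ratio, one item fractionally."""
    order = sorted(range(len(a)), key=lambda i: c[i] / a[i], reverse=True)
    cap, val = b, 0.0
    for i in order:
        take = min(1.0, cap / a[i])
        val += take * c[i]
        cap -= take * a[i]
        if cap <= 0:
            break
    return val

def ip_value(a, c, b):
    """Exact binary optimum by enumeration (fine for small n)."""
    best = 0.0
    for x in itertools.product((0, 1), repeat=len(a)):
        if sum(ai * xi for ai, xi in zip(a, x)) <= b:
            best = max(best, sum(ci * xi for ci, xi in zip(c, x)))
    return best

lp, ip = lp_value(a, c, b), ip_value(a, c, b)
gap = lp - ip  # nonnegative, since the LP relaxes the IP
```

Here the gap is also trivially at most $\max_i c_i$ (the one fractionally-taken item), whereas the paper's result gives a much finer $\operatorname{poly}(m)(\log n)^2 / n$ bound in the Gaussian model.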
Optimization and Control · Data Structures and Algorithms
1 code implementation • 3 Aug 2017 • Nikhil Bansal, Daniel Dadush, Shashwat Garg, Shachar Lovett
An important result in discrepancy due to Banaszczyk states that for any set of $n$ vectors in $\mathbb{R}^m$ of $\ell_2$ norm at most $1$ and any convex body $K$ in $\mathbb{R}^m$ of Gaussian measure at least half, there exists a $\pm 1$ combination of these vectors which lies in $5K$.
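As a point of comparison for Banaszczyk's bound (a hedged sketch, not the paper's technique): for unit vectors, simply signing each vector greedily to shrink the running sum already guarantees $\|\sum_i \varepsilon_i v_i\|_2^2 \le \sum_i \|v_i\|_2^2 \le n$, since the chosen sign makes the cross term $2\varepsilon \langle S, v\rangle$ nonpositive. Banaszczyk's theorem is far stronger, placing the signed sum in $5K$ for any convex $K$ of Gaussian measure at least half, independent of $n$.

```python
import math
import random

random.seed(1)

m, n = 8, 50  # illustrative dimensions, chosen arbitrarily

def unit_gaussian_vector(m):
    """A uniformly random unit vector in R^m."""
    v = [random.gauss(0, 1) for _ in range(m)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

vectors = [unit_gaussian_vector(m) for _ in range(n)]

signs = []
S = [0.0] * m  # running signed sum
for v in vectors:
    # Pick the sign making the cross term 2*eps*<S, v> nonpositive, so
    # ||S + eps*v||^2 <= ||S||^2 + ||v||^2; inductively ||S||^2 <= n.
    eps = -1 if sum(s * x for s, x in zip(S, v)) > 0 else 1
    signs.append(eps)
    S = [s + eps * x for s, x in zip(S, v)]

norm_sq = sum(x * x for x in S)  # provably at most n
```

The greedy bound $\sqrt{n}$ grows with the number of vectors, which is exactly the dependence Banaszczyk's theorem removes.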
Data Structures and Algorithms · Discrete Mathematics