no code implementations • 5 Jun 2023 • David Gamarnik
In particular, critical commentaries~\cite{angelini2023modern} and~\cite{boettcher2023inability} point out that a simple greedy algorithm outperforms GNNs in the setting of random graphs, and that even stronger algorithmic performance can be reached with more sophisticated methods.
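To make the comparison concrete, here is a minimal sketch of the kind of degree-based greedy baseline such commentaries invoke, assuming for illustration that the benchmark problem is maximum independent set on a random graph; the function name and degree-ordering heuristic are my choices, not taken from the cited papers.

```python
def greedy_independent_set(n, edges):
    """Greedy maximum-independent-set heuristic: scan vertices in
    increasing-degree order, keeping each vertex none of whose
    neighbors has been kept already. A simple baseline of the kind
    compared against GNN-based solvers."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Low-degree vertices block few others, so try them first.
    order = sorted(range(n), key=lambda u: len(adj[u]))
    picked = set()
    for u in order:
        if not (adj[u] & picked):
            picked.add(u)
    return picked
```

On a triangle plus an isolated vertex, the heuristic picks the isolated vertex and one triangle vertex, which is optimal for that instance.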
no code implementations • 2 Mar 2021 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Using a simple covering number argument, we establish that under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data necessarily has a well-controlled outer norm.
no code implementations • 25 Apr 2020 • David Gamarnik, Aukosh Jagannath, Alexander S. Wein
For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory (although we consider the search problem as opposed to the decision problem).
no code implementations • 23 Mar 2020 • Matt Emschwiller, David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Thus a message implied by our results is that parametrizing wide neural networks by the number of hidden nodes is misleading, and a more fitting measure of parametrization complexity is the number of regression coefficients associated with tensorized data.
no code implementations • 3 Dec 2019 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Next, we show that initializing below this barrier is in fact easily achieved when the weights are randomly generated under relatively weak assumptions.
no code implementations • NeurIPS 2019 • David Gamarnik, Julia Gaudio
We consider the problem of estimating an unknown coordinate-wise monotone function given noisy measurements, known as the isotonic regression problem.
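For readers unfamiliar with the problem, the classical one-dimensional special case is solved exactly by the Pool Adjacent Violators Algorithm (PAVA); the sketch below is a standard textbook implementation of that 1-D case, not the coordinate-wise multivariate estimator studied in the paper.

```python
def isotonic_regression(y):
    """Pool Adjacent Violators Algorithm (PAVA) for 1-D isotonic
    regression: find the nondecreasing sequence f minimizing
    sum((f_i - y_i)^2)."""
    # Maintain blocks of pooled points as [total, count]; merge
    # adjacent blocks whenever their means violate monotonicity.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge while the previous block's mean exceeds the last's
        # (compare cross-products to avoid division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    fit = []
    for total, count in blocks:
        fit.extend([total / count] * count)
    return fit
```

For example, the noisy input `[3, 1, 2]` violates monotonicity and is pooled into the constant fit `[2.0, 2.0, 2.0]`, while an already-monotone input is returned unchanged.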
no code implementations • 24 Oct 2019 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Using a novel combination of the PSLQ integer relation detection and LLL lattice basis reduction algorithms, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ satisfying the mixed-support assumption from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$ for a large class of distributions for the random entries of $X$, even with one measurement $(n=1)$.
no code implementations • 15 Apr 2019 • David Gamarnik, Ilias Zadik
Using the first moment method, we study the densest subgraph problems for subgraphs with fixed, but arbitrary, overlap size with the planted clique, and provide evidence of a phase transition for the presence of the Overlap Gap Property (OGP) at $k=\Theta\left(\sqrt{n}\right)$.
no code implementations • NeurIPS 2018 • David Gamarnik, Ilias Zadik
We consider a high dimensional linear regression problem where the goal is to efficiently recover an unknown vector $\beta^*$ from $n$ noisy linear observations $Y=X\beta^*+W \in \mathbb{R}^n$, for known $X \in \mathbb{R}^{n \times p}$ and unknown $W \in \mathbb{R}^n$.
no code implementations • 14 Nov 2017 • David Gamarnik, Ilias Zadik
The presence of such an Overlap Gap Property phase transition, a notion originating in statistical physics, is known to provide evidence of algorithmic hardness.
no code implementations • 8 Feb 2017 • David Gamarnik, Quan Li, Hongyi Zhang
Under a certain incoherence assumption on $M$ and for the case when both the rank and the condition number of $M$ are bounded, it was shown in \cite{CandesRecht2009, CandesTao2010, keshavan2010, Recht2011, Jain2012, Hardt2014} that $M$ can be recovered exactly or approximately (depending on some trade-off between accuracy and computational complexity) using $O(n \, \text{poly}(\log n))$ samples in super-linear time $O(n^{a} \, \text{poly}(\log n))$ for some constant $a \geq 1$.
no code implementations • 16 Jan 2017 • David Gamarnik, Ilias Zadik
c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors $\beta$ when $n\le ck\log p$ for a sufficiently small constant $c$. We conjecture that OGP is the source of the algorithmic hardness of solving the minimization problem $\min_{\beta}\|Y-X\beta\|_{2}$ in the regime $n<n_{\text{LASSO/CS}}$.
no code implementations • 18 Mar 2016 • Patrick Eschenfeldt, David Gamarnik
We consider the problem of packing node-disjoint directed paths in a directed graph.
no code implementations • 5 Feb 2016 • David Gamarnik, Sidhant Misra
We consider the problem of reconstructing a low-rank matrix from a subset of its entries, and analyze two previously proposed variants of the so-called Alternating Minimization algorithm.
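To illustrate the general idea behind alternating minimization for matrix completion, here is a stripped-down rank-1 sketch: fix one factor, solve for the other in closed form over the observed entries, and alternate. This is a generic textbook variant under my own simplifications (rank 1, squared loss, dense Python lists), not either of the specific variants analyzed in the paper.

```python
import random

def altmin_rank1(M, observed, iters=100):
    """Rank-1 alternating minimization for matrix completion:
    fit M[i][j] ~ u[i] * v[j] using only entries (i, j) in
    `observed`. Each half-step is a closed-form least-squares
    update for one factor with the other held fixed."""
    n, m = len(M), len(M[0])
    rng = random.Random(0)
    v = [rng.random() + 0.5 for _ in range(m)]
    u = [0.0] * n
    for _ in range(iters):
        # Fix v; each u[i] has a scalar least-squares solution.
        for i in range(n):
            num = sum(M[i][j] * v[j] for j in range(m) if (i, j) in observed)
            den = sum(v[j] ** 2 for j in range(m) if (i, j) in observed)
            u[i] = num / den if den else 0.0
        # Fix u; symmetric update for each v[j].
        for j in range(m):
            num = sum(M[i][j] * u[i] for i in range(n) if (i, j) in observed)
            den = sum(u[i] ** 2 for i in range(n) if (i, j) in observed)
            v[j] = num / den if den else 0.0
    return [[u[i] * v[j] for j in range(m)] for i in range(n)]
```

On an exactly rank-1 matrix with one entry hidden, the iteration drives the residual on observed entries to zero and the missing entry is filled in by the recovered factors.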
no code implementations • NeurIPS 2014 • Guy Bresler, David Gamarnik, Devavrat Shah
In this paper we investigate the computational complexity of learning the graph structure underlying a discrete undirected graphical model from i.i.d. samples.
no code implementations • 28 Oct 2014 • Guy Bresler, David Gamarnik, Devavrat Shah
In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics.
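For concreteness, a single step of Glauber dynamics resamples one uniformly chosen node from its conditional distribution given its neighbors. The sketch below instantiates this for an Ising model with $\pm 1$ spins; the function signature and the choice of the Ising model as the example are mine, made only to illustrate the data-generating process the paper assumes.

```python
import math
import random

def glauber_step(spins, neighbors, beta, rng):
    """One step of Glauber dynamics on an Ising model: pick a node
    uniformly at random and resample its +/-1 spin from the
    conditional distribution given its neighbors' current spins."""
    i = rng.randrange(len(spins))
    field = sum(spins[j] for j in neighbors[i])
    # P(spin_i = +1 | rest) = e^{beta*field} / (e^{beta*field} + e^{-beta*field})
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    spins[i] = 1 if rng.random() < p_plus else -1
    return spins
```

Running many such steps from an arbitrary configuration produces the trajectory data from which, in the paper's setting, the underlying graph is to be learned.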
no code implementations • NeurIPS 2014 • Guy Bresler, David Gamarnik, Devavrat Shah
Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters.
no code implementations • 1 Feb 2014 • David Gamarnik, Madhu Sudan
We show that the Survey Propagation-guided decimation algorithm fails to find satisfying assignments on random instances of the "Not-All-Equal-$K$-SAT" problem if the number of message passing iterations is bounded by a constant independent of the size of the instance and the clause-to-variable ratio is above $(1+o_K(1)){2^{K-1}\over K}\log^2 K$ for sufficiently large $K$.