no code implementations • 12 Feb 2024 • Mengqi Lou, Guy Bresler, Ashwin Pananjady
We study the problem of approximately transforming a sample from a source statistical model into a sample from a target statistical model without knowing the parameters of the source model, and construct several computationally efficient reductions of this kind between statistical experiments.
no code implementations • NeurIPS 2021 • Emmanuel Abbe, Enric Boix-Adsera, Matthew Brennan, Guy Bresler, Dheeraj Nagaraj
This paper identifies a structural property of data distributions that enables deep neural networks to learn hierarchically.
no code implementations • 7 Jun 2021 • Enric Boix-Adsera, Guy Bresler, Frederic Koehler
In this paper, we introduce a new algorithm that carefully combines elements of the Chow-Liu algorithm with tree metric reconstruction methods to efficiently and optimally learn tree Ising models under a prediction-centric loss.
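A minimal sketch of the Chow-Liu ingredient (not the paper's full prediction-centric algorithm; function names are illustrative): for zero-field tree Ising models, pairwise mutual information is monotone in the absolute correlation, so a maximum-weight spanning tree on empirical |correlations| recovers the tree.

```python
import numpy as np

def chow_liu_tree(samples):
    """Recover a tree from +/-1 samples via a maximum-weight spanning
    tree on absolute empirical correlations (Prim's algorithm)."""
    n, p = samples.shape
    corr = np.abs(samples.T @ samples) / n  # |empirical correlation| matrix
    in_tree = {0}
    edges = []
    while len(in_tree) < p:
        best = None
        for u in in_tree:
            for v in range(p):
                if v not in in_tree and (
                    best is None or corr[u, v] > corr[best[0], best[1]]
                ):
                    best = (u, v)
        edges.append(tuple(sorted(best)))
        in_tree.add(best[1])
    return edges

def sample_chain_ising(n, p, rho, rng):
    """Samples from a zero-field Ising chain 0-1-...-(p-1) with edge
    correlation rho: each spin copies its parent with prob (1+rho)/2."""
    x = np.empty((n, p))
    x[:, 0] = rng.choice([-1.0, 1.0], size=n)
    for j in range(1, p):
        flip = rng.random(n) < (1 - rho) / 2
        x[:, j] = np.where(flip, -x[:, j - 1], x[:, j - 1])
    return x

rng = np.random.default_rng(0)
x = sample_chain_ising(5000, 5, 0.8, rng)
edges = chow_liu_tree(x)
```

Correlations decay geometrically with graph distance on a chain, so the max-weight spanning tree on |correlations| returns exactly the chain edges here.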
no code implementations • 3 Jun 2021 • Guy Bresler, Brice Huang
We prove that the class of low degree polynomial algorithms cannot find a satisfying assignment at clause density $(1 + o_k(1)) \kappa^* 2^k \log k / k$ for a universal constant $\kappa^* \approx 4.911$.
no code implementations • 29 Mar 2021 • Nir Weinberger, Guy Bresler
For the empirical iteration based on $n$ samples, we show that when initialized at $\theta_{0}=0$, the EM algorithm adaptively achieves the minimax error rate $\tilde{O}\Big(\min\Big\{\frac{1}{(1-2\delta_{*})}\sqrt{\frac{d}{n}},\frac{1}{\|\theta_{*}\|}\sqrt{\frac{d}{n}},\left(\frac{d}{n}\right)^{1/4}\Big\}\Big)$ in no more than $O\Big(\frac{1}{\|\theta_{*}\|(1-2\delta_{*})}\Big)$ iterations (with high probability).
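The empirical EM iteration for this model can be sketched as follows. This is a simplified version assuming a two-component mixture $(1-\delta)\,N(\theta, I) + \delta\,N(-\theta, I)$ with the mixing weight $\delta$ known; names are illustrative. Note that initializing at $\theta_0 = 0$ still moves, because the known weight contributes a constant bias to the E-step posteriors.

```python
import numpy as np

def em_symmetric_mixture(x, delta, theta0, iters=50):
    """EM for (1-delta)*N(theta, I) + delta*N(-theta, I) with known delta.
    E-step: posterior of the + component; M-step: signed average of the data."""
    theta = theta0.astype(float)
    bias = np.log((1 - delta) / delta)
    for _ in range(iters):
        w = 1.0 / (1.0 + np.exp(-(2.0 * x @ theta + bias)))  # P(z=+1 | x)
        theta = ((2.0 * w - 1.0)[:, None] * x).mean(axis=0)   # M-step
    return theta

rng = np.random.default_rng(1)
d, n, delta = 2, 20000, 0.3
theta_star = np.array([2.0, 0.0])
z = np.where(rng.random(n) < 1 - delta, 1.0, -1.0)   # latent component signs
x = z[:, None] * theta_star + rng.standard_normal((n, d))
theta_hat = em_symmetric_mixture(x, delta, np.zeros(d))
```

At $\theta_0 = 0$ the first update is $(1-2\delta)\,\bar{x} \approx (1-2\delta)^2\theta_*$, which is nonzero for $\delta \neq 1/2$, matching the paper's observation that EM escapes the origin adaptively.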
no code implementations • 13 Sep 2020 • Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, Tselil Schramm
Researchers currently use a number of approaches to predict and substantiate information-computation gaps in high-dimensional statistical estimation problems.
no code implementations • NeurIPS 2020 • Guy Bresler, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Xian Wu
Our improved rate serves as one of the first results where an algorithm outperforms SGD-DD on an interesting Markov chain and also provides one of the first theoretical analyses to support the use of experience replay in practice.
no code implementations • NeurIPS 2020 • Guy Bresler, Rares-Darius Buhai
In this paper, we give an algorithm for learning general RBMs with time complexity $\tilde{O}(n^{2^s+1})$, where $s$ is the maximum number of latent variables connected to the MRF neighborhood of an observed variable.
no code implementations • NeurIPS 2020 • Guy Bresler, Dheeraj Nagaraj
For each $D$, $\mathcal{G}_{D} \subseteq \mathcal{G}_{D+1}$ and as $D$ grows the class of functions $\mathcal{G}_{D}$ contains progressively less smooth functions.
no code implementations • 16 May 2020 • Matthew Brennan, Guy Bresler
Inference problems with conjectured statistical-computational gaps are ubiquitous throughout modern statistics, computer science and statistical physics.
no code implementations • 1 Feb 2020 • Guy Bresler, Dheeraj Nagaraj
This technique yields several new representation and learning results for neural networks.
no code implementations • NeurIPS 2019 • Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix Adsera, Guy Bresler
We consider the problem of experimental design for learning causal graphs that have a tree structure.
no code implementations • 8 Aug 2019 • Matthew Brennan, Guy Bresler
This paper develops several average-case reduction techniques to show new hardness results for three central high-dimensional statistics problems, implying a statistical-computational gap induced by robustness, a detection-recovery gap and a universality principle for these gaps.
no code implementations • 20 Feb 2019 • Matthew Brennan, Guy Bresler
We also show the surprising result that weaker forms of the PC conjecture up to clique size $K = o(N^\alpha)$ for any given $\alpha \in (0, 1/2]$ imply tight computational lower bounds for sparse PCA at sparsities $k = o(n^{\alpha/3})$.
no code implementations • 19 Feb 2019 • Matthew Brennan, Guy Bresler, Wasim Huleihel
In the general submatrix detection problem, the task is to detect the presence of a small $k \times k$ submatrix with entries sampled from a distribution $\mathcal{P}$ in an $n \times n$ matrix of samples from $\mathcal{Q}$.
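As a toy illustration of the detection task (not the paper's algorithm, and assuming $\mathcal{Q} = N(0,1)$ with $\mathcal{P}$ mean-shifted by $\mu > 0$): the naive global-sum test already succeeds whenever the planted signal $k^2\mu$ is large relative to the null fluctuation $n$.

```python
import numpy as np

def sum_test(matrix, threshold=4.0):
    """Naive detector: under iid N(0,1) entries the total sum has std n,
    while a planted k x k submatrix with mean mu shifts it by k^2 * mu.
    Declares 'planted' when the standardized sum exceeds the threshold."""
    n = matrix.shape[0]
    z = matrix.sum() / n  # standardized total sum (std of sum is n)
    return z > threshold

rng = np.random.default_rng(2)
n, k, mu = 100, 30, 1.0
null = rng.standard_normal((n, n))
planted = rng.standard_normal((n, n))
planted[:k, :k] += mu                 # plant the elevated submatrix
detected_null = sum_test(null)
detected_planted = sum_test(planted)
```

Here the shift is $k^2\mu/n = 9$ standard deviations, so the test separates the two cases; the interesting regimes in the paper are exactly those where such naive statistics fail.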
no code implementations • NeurIPS 2018 • Guy Bresler, Sung Min Park, Madalina Persu
Sparse Principal Component Analysis (SPCA) and Sparse Linear Regression (SLR) have a wide range of applications and have attracted a tremendous amount of attention in the last two decades as canonical examples of statistical problems in high dimension.
no code implementations • 25 May 2018 • Guy Bresler, Frederic Koehler, Ankur Moitra, Elchanan Mossel
This hardness result is based on a sharp and surprising characterization of the representational power of bounded degree RBMs: the distribution on their observed variables can simulate any bounded order MRF.
no code implementations • 8 May 2018 • Ziv Goldfeld, Guy Bresler, Yury Polyanskiy
We first show that at zero temperature, order of $\sqrt{n}$ bits can be stored in the system indefinitely by coding over stable, striped configurations.
Information Theory • Statistical Mechanics
no code implementations • 17 Feb 2018 • Guy Bresler, Dheeraj Nagaraj
We develop a new approach that applies to both the Ising and Exponential Random Graph settings based on a general and natural statistical test.
no code implementations • 6 Nov 2017 • Guy Bresler, Mina Karzand
We assume that the matrix encoding the preferences of each user type for each item type is randomly generated; in this way the model captures structure in both the item and user spaces, with the amount of structure depending on the number of types of each.
no code implementations • 22 Apr 2016 • Guy Bresler, Mina Karzand
We study the problem of learning a tree Ising model from samples such that subsequent predictions made using the model are accurate.
no code implementations • 20 Jul 2015 • Guy Bresler, Devavrat Shah, Luis F. Voloch
There is much empirical evidence that item-item collaborative filtering works well in practice.
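For reference, a minimal sketch of what item-item collaborative filtering computes (cosine similarity between item columns of the ratings matrix; names are illustrative, not the paper's model):

```python
import numpy as np

def item_item_scores(ratings, user):
    """Item-item collaborative filtering on a (users x items) 0/1 matrix:
    score each item for `user` by cosine similarity between item columns,
    aggregated over the items the user has already consumed."""
    norms = np.linalg.norm(ratings, axis=0)
    norms[norms == 0] = 1.0                # avoid dividing empty columns by 0
    cols = ratings / norms                 # normalize item columns
    sim = cols.T @ cols                    # cosine similarity between items
    np.fill_diagonal(sim, 0.0)             # an item shouldn't recommend itself
    return sim @ ratings[user]             # aggregate over the user's items

# toy example: users 0-2 like items {0, 1}; user 3 has only item 0 so far
R = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
scores = item_item_scores(R, user=3)
```

Item 1 co-occurs with item 0 across users, so it receives the top score for user 3, while the never-consumed item 2 scores zero.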
no code implementations • NeurIPS 2014 • Guy Bresler, David Gamarnik, Devavrat Shah
In this paper we investigate the computational complexity of learning the graph structure underlying a discrete undirected graphical model from i.i.d. samples.
no code implementations • 22 Nov 2014 • Guy Bresler
In this paper we show that a simple greedy procedure allows one to learn the structure of an Ising model on an arbitrary bounded-degree graph in time on the order of $p^2$.
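A simplified sketch of such a greedy neighborhood search (add the variable giving the largest conditional-entropy gain, stop below a threshold; the actual algorithm also includes a pruning pass, omitted here, and all names are illustrative):

```python
import numpy as np

def cond_entropy(samples, u, S):
    """Empirical conditional entropy H(X_u | X_S) in bits for +/-1 data."""
    n = len(samples)
    counts = {}
    for row in samples:
        c = counts.setdefault(tuple(row[list(S)]), [0, 0])
        c[0 if row[u] > 0 else 1] += 1
    h = 0.0
    for a, b in counts.values():
        m = a + b
        for c in (a, b):
            if c:
                h -= (c / n) * np.log2(c / m)
    return h

def greedy_neighborhood(samples, u, tau=0.02):
    """Greedily grow a candidate neighborhood S of node u until no
    remaining variable reduces H(X_u | X_S) by more than tau bits."""
    p = samples.shape[1]
    S, h = [], cond_entropy(samples, u, [])
    while True:
        cands = [v for v in range(p) if v != u and v not in S]
        if not cands:
            return set(S)
        gain, v = max((h - cond_entropy(samples, u, S + [v]), v)
                      for v in cands)
        if gain < tau:
            return set(S)
        S.append(v)
        h -= gain

# demo on an Ising chain 0-1-2-3 with edge correlation 0.8
rng = np.random.default_rng(3)
n, p, rho = 20000, 4, 0.8
x = np.empty((n, p))
x[:, 0] = rng.choice([-1.0, 1.0], size=n)
for j in range(1, p):
    flip = rng.random(n) < (1 - rho) / 2
    x[:, j] = np.where(flip, -x[:, j - 1], x[:, j - 1])
nbrs = greedy_neighborhood(x, u=1)
```

On the chain, once both true neighbors of node 1 are added, every remaining variable is conditionally independent of $X_1$, so its empirical gain falls below the threshold and the search stops.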
no code implementations • NeurIPS 2014 • Guy Bresler, George H. Chen, Devavrat Shah
Despite the prevalence of collaborative filtering in recommendation systems, there has been little theoretical development on why and how well it works, especially in the "online" setting, where items are recommended to users over time.
no code implementations • 28 Oct 2014 • Guy Bresler, David Gamarnik, Devavrat Shah
In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics.
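For concreteness, Glauber dynamics repeatedly picks a uniformly random node and resamples its spin from the conditional distribution given its neighbors; a minimal simulator (names illustrative):

```python
import numpy as np

def glauber_trajectory(adj, beta, steps, rng):
    """One trajectory of Glauber dynamics for an Ising model with coupling
    matrix beta*adj: each step resamples one uniformly random spin from
    P(x_i = +1 | rest) = sigmoid(2 * beta * sum_j adj[i, j] * x_j)."""
    p = adj.shape[0]
    x = rng.choice([-1.0, 1.0], size=p)
    traj = []
    for _ in range(steps):
        i = rng.integers(p)
        field = beta * adj[i] @ x               # local field at node i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1.0 if rng.random() < p_plus else -1.0
        traj.append((i, x.copy()))
    return traj

# two coupled spins at beta = 1: stationary correlation is tanh(1) ~ 0.76
rng = np.random.default_rng(4)
adj = np.array([[0.0, 1.0],
                [1.0, 0.0]])
traj = glauber_trajectory(adj, beta=1.0, steps=20000, rng=rng)
corr = np.mean([s[0] * s[1] for _, s in traj[10000:]])
```

Learning from such trajectories differs from the i.i.d. setting: each observation reveals which node was updated and the state it moved to, exposing single-site conditionals directly.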
no code implementations • NeurIPS 2014 • Guy Bresler, David Gamarnik, Devavrat Shah
Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters.