no code implementations • 16 Oct 2023 • Jingkai Yan, Shiyu Wang, Xinyu Rain Wei, Jimmy Wang, Zsuzsanna Márka, Szabolcs Márka, John Wright
In this work, we study TpopT (TemPlate OPTimization) as an alternative, scalable framework for detecting low-dimensional families of signals that maintains high interpretability.
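As an illustration of the idea of optimizing over a continuous template parameter rather than matching against a dense template grid, here is a heavily simplified, hypothetical sketch; the Gaussian-bump template family, finite-difference gradient, and step sizes are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)

# Toy one-parameter template family: a unit-norm Gaussian bump centered at theta.
def template(theta, width=0.02):
    g = np.exp(-0.5 * ((t - theta) / width) ** 2)
    return g / np.linalg.norm(g)

def correlate(x, theta):
    return float(template(theta) @ x)

# Template optimization: ascend the correlation over the continuous parameter
# theta (here via finite differences) instead of scanning a dense template grid.
def template_opt(x, theta0, step=5e-4, iters=200, h=1e-4):
    theta = theta0
    for _ in range(iters):
        grad = (correlate(x, theta + h) - correlate(x, theta - h)) / (2 * h)
        theta = float(np.clip(theta + step * grad, 0.0, 1.0))
    return theta, correlate(x, theta)

x = template(0.63) + 0.05 * rng.normal(size=t.size)   # noisy signal at theta = 0.63
print(template_opt(x, theta0=0.60))                   # theta should drift toward 0.63
```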
no code implementations • 23 Jul 2022 • Jingkai Yan, Robert Colgan, John Wright, Zsuzsa Márka, Imre Bartos, Szabolcs Márka
Various approaches have been proposed for improving the efficiency of the detection scheme, with hierarchical matched filtering being an important strategy.
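For readers unfamiliar with the baseline, a minimal numpy sketch of plain (non-hierarchical) matched filtering against a small template bank follows; the sinusoidal templates, injection strength, and noise model are illustrative assumptions, not the gravitational-wave pipeline itself.

```python
import numpy as np

def matched_filter_stats(data, templates):
    """Correlate data against a bank of unit-norm templates and return
    the peak correlation (detection statistic) for each template."""
    stats = []
    for tmpl in templates:
        tmpl = tmpl / np.linalg.norm(tmpl)           # unit-normalize the template
        corr = np.correlate(data, tmpl, mode="valid")
        stats.append(np.abs(corr).max())             # best time-shift match
    return np.array(stats)

# Illustrative example: a sinusoidal "signal" buried in Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 60 * t[:256])            # hypothetical template shape
data = rng.normal(size=1024)
data[300:556] += 0.5 * signal                        # inject the signal
bank = [np.sin(2 * np.pi * f * t[:256]) for f in (40, 60, 80)]
print(matched_filter_stats(data, bank))              # the 60 Hz template should score highest
```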
1 code implementation • 9 Mar 2022 • Sam Buchanan, Jingkai Yan, Ellie Haber, John Wright
Achieving invariance to nuisance transformations is a fundamental challenge in the construction of robust and reliable vision systems.
no code implementations • NeurIPS 2021 • Tingran Wang, Sam Buchanan, Dar Gilboa, John Wright
Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems.
no code implementations • NeurIPS 2021 • Junhui Zhang, Jingkai Yan, John Wright
We show that a single, universal choice of the regularization parameter suffices to achieve reconstruction error proportional to the (a priori unknown) noise level.
2 code implementations • 21 May 2021 • Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma
This work provides a plausible theoretical framework for interpreting modern deep (convolutional) networks from the principles of data compression and discriminative representation.
no code implementations • 8 Apr 2021 • Jingkai Yan, Mariam Avagyan, Robert E. Colgan, Doğa Veske, Imre Bartos, John Wright, Zsuzsa Márka, Szabolcs Márka
Moreover, we show that the proposed neural network architecture can outperform matched filtering, both with and without knowledge of a prior on the parameter distribution.
3 code implementations • 27 Oct 2020 • Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma
The layered architectures, linear and nonlinear operators, and even parameters of the network are all explicitly constructed layer-by-layer in a forward propagation fashion by emulating the gradient scheme.
no code implementations • ICLR 2021 • Sam Buchanan, Dar Gilboa, John Wright
Our analysis demonstrates concrete benefits of depth and width in the context of a practically-motivated model problem: the depth acts as a fitting resource, with larger depths corresponding to smoother networks that can more readily separate the class manifolds, and the width acts as a statistical resource, enabling concentration of the randomly-initialized network and its gradients.
no code implementations • 14 Jul 2020 • Yuqian Zhang, Qing Qu, John Wright
We highlight the key role of symmetry in shaping the objective landscape and discuss the different roles of rotational and discrete symmetries.
1 code implementation • ICLR 2020 • Yenson Lau, Qing Qu, Han-Wen Kuo, Pengcheng Zhou, Yuqian Zhang, John Wright
Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure.
no code implementations • 20 Jan 2020 • Qing Qu, Zhihui Zhu, Xiao Li, Manolis C. Tsakiris, John Wright, René Vidal
The problem of finding the sparsest vector (direction) in a low-dimensional subspace can be viewed as a homogeneous variant of the sparse recovery problem; it finds applications in robust subspace recovery, dictionary learning, sparse blind deconvolution, and many other problems in signal processing and machine learning.
5 code implementations • 22 Nov 2019 • Hasan Genc, Seah Kim, Alon Amid, Ameer Haj-Ali, Vighnesh Iyer, Pranav Prakash, Jerry Zhao, Daniel Grubb, Harrison Liew, Howard Mao, Albert Ou, Colin Schmidt, Samuel Steffl, John Wright, Ion Stoica, Jonathan Ragan-Kelley, Krste Asanovic, Borivoje Nikolic, Yakun Sophia Shao
DNN accelerators are often developed and evaluated in isolation without considering the cross-stack, system-level effects in real-world environments.
1 code implementation • 28 Aug 2019 • Yenson Lau, Qing Qu, Han-Wen Kuo, Pengcheng Zhou, Yuqian Zhang, John Wright
This paper is motivated by recent theoretical advances, which characterize the optimization landscape of a particular nonconvex formulation of SaSD.
no code implementations • 6 Jun 2019 • Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, Yi Ma
Most existing methods solve for the dictionary (and the sparse representations) using heuristic algorithms, usually without theoretical guarantees for either optimality or complexity.
no code implementations • CVPR 2017 • Yuqian Zhang, Yenson Lau, Han-Wen Kuo, Sky Cheung, Abhay Pasupathy, John Wright
Blind deconvolution is the problem of recovering a convolutional kernel $\boldsymbol a_0$ and an activation signal $\boldsymbol x_0$ from their convolution $\boldsymbol y = \boldsymbol a_0 \circledast \boldsymbol x_0$.
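A minimal numpy sketch of this observation model follows; the decaying kernel, sparsity level, and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Hypothetical instance of the model y = a0 (cyclic *) x0:
a0 = np.exp(-np.arange(n) / 4.0)           # short, decaying kernel (zero-padded to length n)
a0 /= np.linalg.norm(a0)
x0 = np.zeros(n)
x0[rng.choice(n, size=8, replace=False)] = rng.normal(size=8)   # sparse activation signal

# Cyclic convolution via the FFT (which diagonalizes circular convolution).
y = np.real(np.fft.ifft(np.fft.fft(a0) * np.fft.fft(x0)))
# Recovery task: from y alone, estimate a0 and x0 (up to shift and scale).
```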
no code implementations • 2 Jan 2019 • Han-Wen Kuo, Yenson Lau, Yuqian Zhang, John Wright
We study the $\textit{Short-and-Sparse (SaS) deconvolution}$ problem of recovering a short signal $\mathbf a_0$ and a sparse signal $\mathbf x_0$ from their convolution.
no code implementations • NeurIPS 2018 • Yuqian Zhang, Han-Wen Kuo, John Wright
We assume the short signal to have unit $\ell^2$ norm and cast the blind deconvolution problem as a nonconvex optimization problem over the sphere.
no code implementations • 1 Jun 2018 • Yuqian Zhang, Han-Wen Kuo, John Wright
We assume the short signal to have unit $\ell^2$ norm and cast the blind deconvolution problem as a nonconvex optimization problem over the sphere.
no code implementations • 3 Dec 2017 • Qing Qu, Yuqian Zhang, Yonina C. Eldar, John Wright
We study the convolutional phase retrieval problem of recovering an unknown signal $\mathbf x \in \mathbb C^n$ from $m$ measurements consisting of the magnitude of its cyclic convolution with a given kernel $\mathbf a \in \mathbb C^m$.
no code implementations • NeurIPS 2017 • Qing Qu, Yuqian Zhang, Yonina Eldar, John Wright
We study the convolutional phase retrieval problem, which asks us to recover an unknown signal $\mathbf x$ of length $n$ from $m$ measurements consisting of the magnitude of its cyclic convolution with a known kernel $\mathbf a$ of length $m$.
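For concreteness, a short numpy sketch of this measurement model; the dimensions and the complex Gaussian kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 1024

x = rng.normal(size=n) + 1j * rng.normal(size=n)     # unknown signal of length n
a = rng.normal(size=m) + 1j * rng.normal(size=m)     # known kernel of length m

# Cyclic convolution of a with x (zero-padded to length m), observed in magnitude only.
x_pad = np.concatenate([x, np.zeros(m - n, dtype=complex)])
y = np.abs(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x_pad)))   # m magnitude measurements
```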
no code implementations • 31 Jul 2017 • Constantinos Daskalakis, Gautam Kamath, John Wright
Given samples from an unknown distribution $p$ and a description of a distribution $q$, are $p$ and $q$ close or far?
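To make the question concrete, here is a naive plug-in tester that estimates the total variation distance between the empirical distribution of the samples and the described distribution $q$; this is for illustration only and is not the paper's sample-optimal algorithm, and the threshold and domain size are arbitrary assumptions.

```python
import numpy as np

def plugin_tv_test(samples, q, threshold=0.1):
    """Estimate TV(p, q) from samples of p via the empirical distribution,
    then declare "close" or "far" against a fixed threshold."""
    counts = np.bincount(samples, minlength=len(q))
    p_hat = counts / counts.sum()
    tv = 0.5 * np.abs(p_hat - q).sum()
    return tv, ("close" if tv <= threshold else "far")

rng = np.random.default_rng(9)
q = np.full(100, 0.01)                               # q: uniform over 100 symbols
samples = rng.choice(100, size=20000, p=q)           # here p actually equals q
print(plugin_tv_test(samples, q))                    # small TV estimate -> "close"
```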
1 code implementation • 22 Feb 2016 • Ju Sun, Qing Qu, John Wright
We prove that when the measurement vectors are generic (i.i.d. complex Gaussian) and the number of measurements is large enough ($m \ge C n \log^3 n$), with high probability, a natural least-squares formulation for GPR has the following benign geometric structure: (1) there are no spurious local minimizers, and all global minimizers are equal to the target signal $\mathbf x$, up to a global phase; and (2) the objective function has a negative curvature around each saddle point.
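A small numpy sketch of this least-squares objective under the i.i.d. complex Gaussian model follows; the constant in the sample-size rule and the dimensions are illustrative assumptions.

```python
import numpy as np

def gpr_least_squares(z, A, y):
    """f(z) = (1/2m) * sum_k (y_k^2 - |<a_k, z>|^2)^2 -- a natural least-squares
    objective for generalized phase retrieval, matching the abstract's setup."""
    m = len(y)
    residual = y ** 2 - np.abs(A @ z) ** 2
    return residual @ residual / (2 * m)

# Illustrative instance with i.i.d. complex Gaussian measurement vectors.
rng = np.random.default_rng(3)
n = 32
m = int(10 * n * np.log(n) ** 3)                     # the m >= C n log^3 n regime
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ x)
print(gpr_least_squares(x, A, y))                    # zero at the target, up to global phase
```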
no code implementations • 15 Nov 2015 • Ju Sun, Qing Qu, John Wright
We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb{R}^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.
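A minimal numpy sketch of a planted instance of this problem; the orthogonal dictionary and the Bernoulli-Gaussian coefficient model are illustrative assumptions in the spirit of this line of work.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, theta = 30, 3000, 0.1                 # dimension, number of samples, sparsity level

# Planted instance Y = A0 X0 with a complete (square, invertible) dictionary A0
# and a Bernoulli-Gaussian sparse coefficient matrix X0.
A0 = np.linalg.qr(rng.normal(size=(n, n)))[0]        # orthogonal, hence complete
X0 = rng.normal(size=(n, p)) * (rng.random((n, p)) < theta)
Y = A0 @ X0
# Recovery task: from Y alone, find A0 and X0 up to signed permutation.
```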
no code implementations • 11 Nov 2015 • Ju Sun, Qing Qu, John Wright
We give the first efficient algorithm that provably recovers $\mathbf A_0$ when $\mathbf X_0$ has $O(n)$ nonzeros per column, under a suitable probability model for $\mathbf X_0$.
3 code implementations • 21 Oct 2015 • Ju Sun, Qing Qu, John Wright
In this note, we focus on smooth nonconvex optimization problems that obey: (1) all local minimizers are also global; and (2) around any saddle point or local maximizer, the objective has negative directional curvature.
1 code implementation • 26 Apr 2015 • Ju Sun, Qing Qu, John Wright
We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb{R}^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.
no code implementations • 31 Jan 2015 • Huan Wang, John Wright, Daniel Spielman
Unlike the state-of-the-art dictionary learning algorithms which impose sparsity constraints on a sample-by-sample basis, we instead treat the samples as a batch, and impose the sparsity constraint on the batch as a whole.
1 code implementation • NeurIPS 2014 • Qing Qu, Ju Sun, John Wright
In this paper, we focus on a $\textit{planted sparse model}$ for the subspace: the target sparse vector is embedded in an otherwise random subspace.
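A short numpy sketch generating an instance of this planted sparse model; the dimensions and sparsity level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, k = 1000, 10, 30                      # ambient dimension, subspace dimension, sparsity

# Planted sparse model: embed a k-sparse target vector in an otherwise random subspace.
x0 = np.zeros(p)
x0[rng.choice(p, size=k, replace=False)] = rng.normal(size=k)
x0 /= np.linalg.norm(x0)
B = rng.normal(size=(p, n - 1))                      # random complement directions
basis = np.linalg.qr(np.column_stack([x0, B]))[0]    # orthonormal basis whose span contains x0
# Recovery task: given only span(basis), find the sparse direction x0.
```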
no code implementations • 29 Mar 2014 • Cun Mu, Yuqian Zhang, John Wright, Donald Goldfarb
Recovering matrices from compressive and grossly corrupted observations is a fundamental problem in robust statistics, with rich applications in computer vision and machine learning.
no code implementations • 22 Jul 2013 • Cun Mu, Bo Huang, John Wright, Donald Goldfarb
The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor.
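This relaxation is easy to state in code; below is a small numpy sketch computing the sum of nuclear norms of the mode-$k$ unfoldings, with arbitrary illustrative dimensions.

```python
import numpy as np

def sum_of_nuclear_norms(T):
    """Sum of the nuclear norms of the mode-k unfoldings of a tensor T --
    the convex surrogate discussed in the abstract."""
    total = 0.0
    for k in range(T.ndim):
        unfold = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)   # mode-k matricization
        total += np.linalg.norm(unfold, ord="nuc")              # nuclear norm
    return total

# Example: a rank-1 tensor, so every unfolding has rank 1.
rng = np.random.default_rng(6)
a, b, c = rng.normal(size=10), rng.normal(size=12), rng.normal(size=14)
T = np.einsum("i,j,k->ijk", a, b, c)
print(sum_of_nuclear_norms(T))
```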
no code implementations • 4 Jul 2013 • Yuqian Zhang, Cun Mu, Han-Wen Kuo, John Wright
Illumination variation remains a central challenge in object detection and recognition.
no code implementations • NeurIPS 2012 • Xiao-Ming Wu, Zhenguo Li, Anthony M. So, John Wright, Shih-Fu Chang
We prove that under proper absorption rates, a random walk starting from a set $\mathcal{S}$ of low conductance will be mostly absorbed in $\mathcal{S}$.
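A small numerical sketch of this phenomenon, assuming the standard partially-absorbing-random-walk closed form for absorption probabilities, $A = (\Lambda + L)^{-1}\Lambda$ with $L$ the graph Laplacian and $\Lambda$ the diagonal matrix of absorption rates; the two-clique graph and rate values are illustrative assumptions.

```python
import numpy as np

def absorption_probabilities(W, alpha):
    """A[i, j] = probability that a partially absorbing random walk started at i
    is absorbed at j:  A = (Lambda + L)^{-1} Lambda,  L = D - W."""
    D = np.diag(W.sum(axis=1))
    Lam = np.diag(alpha)
    return np.linalg.solve(Lam + D - W, Lam)

# Two dense 5-node cliques joined by a single weak edge (a low-conductance cut).
n = 10
W = np.zeros((n, n))
W[:5, :5] = 1.0
W[5:, 5:] = 1.0
np.fill_diagonal(W, 0.0)
W[4, 5] = W[5, 4] = 0.1
A = absorption_probabilities(W, alpha=np.full(n, 0.5))
print(A[0, :5].sum())    # mass absorbed inside the starting cluster -- close to 1
```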
no code implementations • 2 Aug 2012 • Ju Sun, Yuqian Zhang, John Wright
Motivated by vision tasks such as robust face and object recognition, we consider the following general problem: given a collection of low-dimensional linear subspaces in a high-dimensional ambient (image) space, and a query point (image), efficiently determine the nearest subspace to the query in $\ell^1$ distance.
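The $\ell^1$ distance to a single subspace can be computed exactly by the standard linear-programming reformulation; the sketch below does this with scipy and brute-forces the nearest subspace over a small collection, which is only the baseline the paper seeks to accelerate. The dimensions and subspace count are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_distance_to_subspace(q, U):
    """min_c ||q - U c||_1 via the LP:  min 1^T t  s.t.  -t <= q - U c <= t."""
    p, d = U.shape
    cost = np.concatenate([np.zeros(d), np.ones(p)])   # variables [c (d), t (p)]
    A_ub = np.block([[-U, -np.eye(p)],                 #  q - U c <= t
                     [U, -np.eye(p)]])                 # -(q - U c) <= t
    b_ub = np.concatenate([-q, q])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + p))
    return res.fun

# Nearest-subspace query: smallest l1 distance over a collection of subspaces.
rng = np.random.default_rng(7)
subspaces = [np.linalg.qr(rng.normal(size=(50, 5)))[0] for _ in range(3)]
q = subspaces[1] @ rng.normal(size=5) + 0.01 * rng.normal(size=50)
print(np.argmin([l1_distance_to_subspace(q, U) for U in subspaces]))  # -> 1
```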
1 code implementation • 21 Feb 2012 • John Wright, Arvind Ganesh, Kerui Min, Yi Ma
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements.
Information Theory
no code implementations • 3 Nov 2011 • John Wright, Arvind Ganesh, Allen Yang, Zihan Zhou, Yi Ma
This report concerns the use of techniques for sparse signal representation and sparse error correction for automatic face recognition.
1 code implementation • 14 Jan 2010 • Zihan Zhou, XiaoDong Li, John Wright, Emmanuel Candes, Yi Ma
We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors.
Information Theory
3 code implementations • 18 Dec 2009 • Emmanuel J. Candes, Xiao-Dong Li, Yi Ma, John Wright
This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted.
Information Theory
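A minimal ADMM sketch of the convex program behind this result (Principal Component Pursuit: minimize $\|L\|_* + \lambda \|S\|_1$ subject to $L + S = M$); the $\lambda = 1/\sqrt{\max(m,n)}$ choice follows the paper, while the step parameter $\mu$, iteration count, and test dimensions are heuristic assumptions.

```python
import numpy as np

def soft(X, tau):                      # entrywise soft-thresholding
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):                       # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, tau)) @ Vt

def pcp_admm(M, iters=200):
    """Minimal ADMM sketch for PCP: min ||L||_* + lam ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                 # the paper's suggested weight
    mu = m * n / (4.0 * np.abs(M).sum())           # common heuristic step parameter
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank update
        S = soft(M - L + Y / mu, lam / mu)         # sparse update
        Y = Y + mu * (M - L - S)                   # dual update
    return L, S

# Test on a synthetic low-rank-plus-sparse matrix.
rng = np.random.default_rng(8)
L0 = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 100))
S0 = 10 * rng.normal(size=(100, 100)) * (rng.random((100, 100)) < 0.05)
L, S = pcp_admm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))   # small relative error expected
```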
no code implementations • NeurIPS 2009 • John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, Yi Ma
Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis.