Search Results for author: John Wright

Found 39 papers, 13 papers with code

TpopT: Efficient Trainable Template Optimization on Low-Dimensional Manifolds

no code implementations 16 Oct 2023 Jingkai Yan, Shiyu Wang, Xinyu Rain Wei, Jimmy Wang, Zsuzsanna Márka, Szabolcs Márka, John Wright

In this work, we study TpopT (TemPlate OPTimization) as an alternative scalable framework for detecting low-dimensional families of signals while maintaining high interpretability.

Computational Efficiency Gravitational Wave Detection +1

Boosting the Efficiency of Parametric Detection with Hierarchical Neural Networks

no code implementations 23 Jul 2022 Jingkai Yan, Robert Colgan, John Wright, Zsuzsa Márka, Imre Bartos, Szabolcs Márka

Various approaches have been proposed for improving the efficiency of the detection scheme, with hierarchical matched filtering being an important strategy.

Astronomy
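
As a point of reference for the matched-filtering baseline this entry builds on, below is a minimal NumPy sketch of flat (non-hierarchical) template-bank matched filtering. The template bank, noise model, and function name are illustrative assumptions, not the paper's detection pipeline.

```python
import numpy as np

def matched_filter_snr(data, templates):
    """Correlate data against each unit-norm template and return the
    best-matching template index together with its peak statistic."""
    # Normalize templates so the statistic is comparable across the bank.
    templates = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    stats = templates @ data              # inner products <data, template_i>
    best = int(np.argmax(np.abs(stats)))
    return best, np.abs(stats[best])

# Toy usage: a small bank of sinusoidal templates and noisy data.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
bank = np.stack([np.sin(2 * np.pi * f * t) for f in (5, 10, 20, 40)])
signal = bank[2] / np.linalg.norm(bank[2])
data = 0.5 * signal + 0.1 * rng.standard_normal(t.size)
print(matched_filter_snr(data, bank))     # should pick template index 2
```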

Resource-Efficient Invariant Networks: Exponential Gains by Unrolled Optimization

1 code implementation 9 Mar 2022 Sam Buchanan, Jingkai Yan, Ellie Haber, John Wright

Achieving invariance to nuisance transformations is a fundamental challenge in the construction of robust and reliable vision systems.

Object Detection

Deep Networks Provably Classify Data on Curves

no code implementations NeurIPS 2021 Tingran Wang, Sam Buchanan, Dar Gilboa, John Wright

Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems.

Binary Classification

Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery

no code implementations NeurIPS 2021 Junhui Zhang, Jingkai Yan, John Wright

We show that a single, universal choice of the regularization parameter suffices to achieve reconstruction error proportional to the (a priori unknown) noise level.
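
The tuning-free behavior described here is reminiscent of the square-root Lasso, where an unsquared data-fidelity term makes the regularization weight independent of the noise level. The CVXPY sketch below encodes that idea for low-rank-plus-sparse recovery; the specific objective, weights, and function name are illustrative guesses in that spirit, not a statement of the paper's exact program.

```python
import numpy as np
import cvxpy as cp

def sqrt_pcp(Y, lam, mu):
    """Low-rank + sparse decomposition with an *unsquared* Frobenius
    fidelity term (square-root-Lasso style); lam, mu are fixed weights."""
    L = cp.Variable(Y.shape)
    S = cp.Variable(Y.shape)
    objective = (cp.norm(L, "nuc")
                 + lam * cp.sum(cp.abs(S))
                 + mu * cp.norm(Y - L - S, "fro"))
    cp.Problem(cp.Minimize(objective)).solve()
    return L.value, S.value

# Toy usage: rank-2 matrix plus sparse corruption plus small noise.
rng = np.random.default_rng(1)
n = 30
Y = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
Y[rng.random((n, n)) < 0.05] += 10.0
Y += 0.01 * rng.standard_normal((n, n))
L_hat, S_hat = sqrt_pcp(Y, lam=1.0 / np.sqrt(n), mu=1.0)
```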

ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction

2 code implementations 21 May 2021 Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma

This work provides a plausible theoretical framework that aims to interpret modern deep (convolutional) networks from the principles of data compression and discriminative representation.

Data Compression

Generalized Approach to Matched Filtering using Neural Networks

no code implementations 8 Apr 2021 Jingkai Yan, Mariam Avagyan, Robert E. Colgan, Doğa Veske, Imre Bartos, John Wright, Zsuzsa Márka, Szabolcs Márka

Moreover, we show that the proposed neural network architecture can outperform matched filtering, both with and without knowledge of a prior on the parameter distribution.

Gravitational Wave Detection

Deep Networks from the Principle of Rate Reduction

3 code implementations 27 Oct 2020 Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma

The layered architectures, linear and nonlinear operators, and even parameters of the network are all explicitly constructed layer-by-layer in a forward propagation fashion by emulating the gradient scheme.
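
The objective these two papers unroll is built from a coding-rate (rate-distortion) estimate of a feature matrix. A minimal NumPy version of that quantity, as I understand it from the maximal-coding-rate-reduction line of work, is sketched below; the precision parameter `eps`, the shapes, and the function names are illustrative.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Approximate number of bits to code the columns of Z (d features,
    n samples) up to precision eps:
    R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z @ Z.T)."""
    d, n = Z.shape
    gram = (d / (n * eps ** 2)) * (Z @ Z.T)
    return 0.5 * np.linalg.slogdet(np.eye(d) + gram)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Rate reduction: rate of all features minus the per-class rates,
    each weighted by the class's fraction of the samples."""
    total = coding_rate(Z, eps)
    per_class = [np.mean(labels == c) * coding_rate(Z[:, labels == c], eps)
                 for c in np.unique(labels)]
    return total - sum(per_class)

# Toy usage: two well-separated Gaussian clusters give a large reduction.
rng = np.random.default_rng(0)
Z = np.hstack([rng.standard_normal((16, 100)), 5 + rng.standard_normal((16, 100))])
labels = np.repeat([0, 1], 100)
print(rate_reduction(Z, labels))
```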

Deep Networks and the Multiple Manifold Problem

no code implementations ICLR 2021 Sam Buchanan, Dar Gilboa, John Wright

Our analysis demonstrates concrete benefits of depth and width in the context of a practically-motivated model problem: the depth acts as a fitting resource, with larger depths corresponding to smoother networks that can more readily separate the class manifolds, and the width acts as a statistical resource, enabling concentration of the randomly-initialized network and its gradients.

Binary Classification

From Symmetry to Geometry: Tractable Nonconvex Problems

no code implementations 14 Jul 2020 Yuqian Zhang, Qing Qu, John Wright

We highlight the key role of symmetry in shaping the objective landscape and discuss the different roles of rotational and discrete symmetries.

Short and Sparse Deconvolution --- A Geometric Approach

1 code implementation ICLR 2020 Yenson Lau, Qing Qu, Han-Wen Kuo, Pengcheng Zhou, Yuqian Zhang, John Wright

Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure.

Deblurring Image Deblurring +1

Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications

no code implementations 20 Jan 2020 Qing Qu, Zhihui Zhu, Xiao Li, Manolis C. Tsakiris, John Wright, René Vidal

The problem of finding the sparsest vector (direction) in a low dimensional subspace can be considered as a homogeneous variant of the sparse recovery problem, which finds applications in robust subspace recovery, dictionary learning, sparse blind deconvolution, and many other problems in signal processing and machine learning.

Dictionary Learning Representation Learning

Short-and-Sparse Deconvolution -- A Geometric Approach

1 code implementation 28 Aug 2019 Yenson Lau, Qing Qu, Han-Wen Kuo, Pengcheng Zhou, Yuqian Zhang, John Wright

This paper is motivated by recent theoretical advances, which characterize the optimization landscape of a particular nonconvex formulation of SaSD.

Deblurring Image Deblurring +1

Complete Dictionary Learning via $\ell^4$-Norm Maximization over the Orthogonal Group

no code implementations 6 Jun 2019 Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, Yi Ma

Most existing methods solve for the dictionary (and sparse representations) using heuristic algorithms, usually without theoretical guarantees for either optimality or complexity.

Dictionary Learning
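
The formulation in the title, maximizing $\|AY\|_4^4$ over orthogonal $A$, admits a simple power-iteration-style ascent: because the objective is a convex function of $A$, jumping to the orthogonal matrix most aligned with the current gradient (its polar factor) never decreases it. The NumPy sketch below is such a generic iteration under an assumed Bernoulli-Gaussian data model; it is not necessarily the exact algorithm proposed in the paper.

```python
import numpy as np

def l4_maximize(Y, iters=100, seed=0):
    """Ascend f(A) = ||A @ Y||_4^4 over orthogonal A by repeatedly
    replacing A with the polar factor (nearest orthogonal matrix) of the
    Euclidean gradient of f."""
    n = Y.shape[0]
    rng = np.random.default_rng(seed)
    A, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal init
    for _ in range(iters):
        G = 4.0 * np.power(A @ Y, 3) @ Y.T             # gradient of ||AY||_4^4
        U, _, Vt = np.linalg.svd(G)
        A = U @ Vt                                     # polar projection onto O(n)
    return A

# Toy usage: Y = A0 @ X0 with orthogonal A0 and sparse X0; the output should
# approximately invert A0 up to signed permutation.
rng = np.random.default_rng(1)
n, p, theta = 20, 2000, 0.1
A0, _ = np.linalg.qr(rng.standard_normal((n, n)))
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
A_hat = l4_maximize(A0 @ X0)
```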

On the Global Geometry of Sphere-Constrained Sparse Blind Deconvolution

no code implementations CVPR 2017 Yuqian Zhang, Yenson Lau, Han-Wen Kuo, Sky Cheung, Abhay Pasupathy, John Wright

Blind deconvolution is the problem of recovering a convolutional kernel $\boldsymbol a_0$ and an activation signal $\boldsymbol x_0$ from their convolution $\boldsymbol y = \boldsymbol a_0 \circledast \boldsymbol x_0$.

Deblurring Dictionary Learning +1
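
For concreteness, the observation model in the snippet above, $\boldsymbol y = \boldsymbol a_0 \circledast \boldsymbol x_0$ with $\circledast$ a cyclic convolution, can be simulated in a few lines of NumPy; the kernel length, signal length, and sparsity level below are arbitrary illustrative choices.

```python
import numpy as np

def cyclic_conv(a, x):
    """Circular convolution of kernel a (zero-padded to len(x)) with x."""
    n = len(x)
    return np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(x)))

# Toy instance: short kernel a0, sparse activation x0, observation y.
rng = np.random.default_rng(0)
n, k, theta = 256, 16, 0.05            # signal length, kernel length, sparsity
a0 = rng.standard_normal(k)
a0 /= np.linalg.norm(a0)               # unit-norm kernel (a standard normalization)
x0 = rng.standard_normal(n) * (rng.random(n) < theta)
y = cyclic_conv(a0, x0)                # the data a blind-deconvolution method sees
```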

Geometry and Symmetry in Short-and-Sparse Deconvolution

no code implementations 2 Jan 2019 Han-Wen Kuo, Yenson Lau, Yuqian Zhang, John Wright

We study the *Short-and-Sparse (SaS) deconvolution* problem of recovering a short signal $\mathbf a_0$ and a sparse signal $\mathbf x_0$ from their convolution.

Structured Local Minima in Sparse Blind Deconvolution

no code implementations NeurIPS 2018 Yuqian Zhang, Han-Wen Kuo, John Wright

We assume the short signal to have unit $\ell^2$ norm and cast the blind deconvolution problem as a nonconvex optimization problem over the sphere.

Structured Local Optima in Sparse Blind Deconvolution

no code implementations 1 Jun 2018 Yuqian Zhang, Han-Wen Kuo, John Wright

We assume the short signal to have unit $\ell^2$ norm and cast the blind deconvolution problem as a nonconvex optimization problem over the sphere.

Convolutional Phase Retrieval via Gradient Descent

no code implementations 3 Dec 2017 Qing Qu, Yuqian Zhang, Yonina C. Eldar, John Wright

We study the convolutional phase retrieval problem, of recovering an unknown signal $\mathbf x \in \mathbb C^n $ from $m$ measurements consisting of the magnitude of its cyclic convolution with a given kernel $\mathbf a \in \mathbb C^m $.

Retrieval
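
The measurement model described in this entry, observing only the magnitudes of a cyclic convolution, is easy to instantiate; the NumPy sketch below generates such data (the dimensions and the complex-Gaussian kernel are illustrative assumptions).

```python
import numpy as np

def conv_phase_measurements(a, x):
    """Magnitudes of the cyclic convolution of a known kernel a (length m)
    with a signal x (length n <= m); only |a * x| is observed, so any
    global phase on x is lost."""
    m = len(a)
    return np.abs(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x, m)))

# Toy instance with complex Gaussian kernel and signal.
rng = np.random.default_rng(0)
n, m = 64, 512
a = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = conv_phase_measurements(a, x)      # m nonnegative measurements
```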

Convolutional Phase Retrieval

no code implementations NeurIPS 2017 Qing Qu, Yuqian Zhang, Yonina Eldar, John Wright

We study the convolutional phase retrieval problem, which asks us to recover an unknown signal ${\mathbf x} $ of length $n$ from $m$ measurements consisting of the magnitude of its cyclic convolution with a known kernel $\mathbf a$ of length $m$.

Retrieval

Which Distribution Distances are Sublinearly Testable?

no code implementations 31 Jul 2017 Constantinos Daskalakis, Gautam Kamath, John Wright

Given samples from an unknown distribution $p$ and a description of a distribution $q$, are $p$ and $q$ close or far?

A Geometric Analysis of Phase Retrieval

1 code implementation 22 Feb 2016 Ju Sun, Qing Qu, John Wright

We show that when the measurement vectors are generic (i.i.d. complex Gaussian) and the number of measurements is large enough ($m \ge C n \log^3 n$), with high probability, a natural least-squares formulation for GPR has the following benign geometric structure: (1) there are no spurious local minimizers, and all global minimizers are equal to the target signal $\mathbf x$, up to a global phase; and (2) the objective function has a negative curvature around each saddle point.

GPR Retrieval
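
One common smooth least-squares formulation for generalized phase retrieval (GPR), of the kind the snippet refers to, compares squared measurement magnitudes with those of a candidate signal. The NumPy sketch below gives that objective and its Wirtinger-style gradient; the exact constants and normalization used in the paper may differ.

```python
import numpy as np

def gpr_loss_and_grad(z, A, y):
    """Least-squares phase retrieval objective
    f(z) = 1/(2m) * sum_k (y_k^2 - |a_k^H z|^2)^2
    and its Wirtinger gradient, with measurement rows a_k^H stacked in A."""
    m = A.shape[0]
    Az = A @ z
    r = np.abs(Az) ** 2 - y ** 2
    loss = np.sum(r ** 2) / (2 * m)
    grad = (A.conj().T @ (r * Az)) / m
    return loss, grad

# Toy instance: i.i.d. complex Gaussian measurements of a random signal.
rng = np.random.default_rng(0)
n, m = 32, 32 * 10
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = np.abs(A @ x)
loss, grad = gpr_loss_and_grad(rng.standard_normal(n) + 0j, A, y)
```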

Complete Dictionary Recovery over the Sphere II: Recovery by Riemannian Trust-region Method

no code implementations 15 Nov 2015 Ju Sun, Qing Qu, John Wright

We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb{R}^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.

Dictionary Learning
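
The generative model in the snippet, $\mathbf Y = \mathbf A_0 \mathbf X_0$ with square invertible $\mathbf A_0$ and sparse $\mathbf X_0$, is easy to instantiate; a Bernoulli-Gaussian $\mathbf X_0$ is one standard choice. The NumPy lines below are illustrative (the paper states its own probability model), as shown here.

```python
import numpy as np

# Illustrative instance of complete dictionary recovery: Y = A0 @ X0 with
# A0 square and invertible, X0 Bernoulli-Gaussian with ~theta*n nonzeros/column.
rng = np.random.default_rng(0)
n, p, theta = 50, 5000, 0.1                     # dimension, samples, sparsity rate
A0 = rng.standard_normal((n, n))
while np.linalg.matrix_rank(A0) < n:            # ensure A0 is invertible
    A0 = rng.standard_normal((n, n))
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
Y = A0 @ X0
# Key structural fact exploited by these methods: the row space of Y equals
# the row space of X0, so sparse rows of X0 are sparse vectors in row(Y).
```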

Complete Dictionary Recovery over the Sphere I: Overview and the Geometric Picture

no code implementations 11 Nov 2015 Ju Sun, Qing Qu, John Wright

We give the first efficient algorithm that provably recovers $\mathbf A_0$ when $\mathbf X_0$ has $O(n)$ nonzeros per column, under a suitable probability model for $\mathbf X_0$.

Dictionary Learning

When Are Nonconvex Problems Not Scary?

3 code implementations 21 Oct 2015 Ju Sun, Qing Qu, John Wright

In this note, we focus on smooth nonconvex optimization problems that obey: (1) all local minimizers are also global; and (2) around any saddle point or local maximizer, the objective has a negative directional curvature.

Dictionary Learning Retrieval +1

Complete Dictionary Recovery over the Sphere

1 code implementation 26 Apr 2015 Ju Sun, Qing Qu, John Wright

We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb R^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.

Dictionary Learning

A Batchwise Monotone Algorithm for Dictionary Learning

no code implementations 31 Jan 2015 Huan Wang, John Wright, Daniel Spielman

Unlike the state-of-the-art dictionary learning algorithms which impose sparsity constraints on a sample-by-sample basis, we instead treat the samples as a batch and impose the sparsity constraint on the batch as a whole.

Dictionary Learning

Finding a sparse vector in a subspace: Linear sparsity using alternating directions

1 code implementation NeurIPS 2014 Qing Qu, Ju Sun, John Wright

In this paper, we focus on a **planted sparse model** for the subspace: the target sparse vector is embedded in an otherwise random subspace.

Dictionary Learning
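
The planted sparse model mentioned above can be generated directly: take a sparse vector, adjoin random directions, orthonormalize, and re-basis so the sparse vector is hidden in an arbitrary basis of the subspace. A NumPy sketch with arbitrary dimension and sparsity choices:

```python
import numpy as np

# Planted sparse model: a sparse vector x0 is embedded in an otherwise
# random k-dimensional subspace of R^n, handed to us via a random basis U.
rng = np.random.default_rng(0)
n, k, nnz = 1000, 10, 30
x0 = np.zeros(n)
support = rng.choice(n, size=nnz, replace=False)
x0[support] = rng.standard_normal(nnz)
x0 /= np.linalg.norm(x0)
# Subspace spanned by x0 and k-1 random directions.
basis = np.column_stack([x0, rng.standard_normal((n, k - 1))])
Q, _ = np.linalg.qr(basis)                              # orthonormal basis
U = Q @ np.linalg.qr(rng.standard_normal((k, k)))[0]    # random rotation hides x0
# Task: from U alone, find the (approximately) sparsest unit vector U @ c.
```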

Scalable Robust Matrix Recovery: Frank-Wolfe Meets Proximal Methods

no code implementations 29 Mar 2014 Cun Mu, Yuqian Zhang, John Wright, Donald Goldfarb

Recovering matrices from compressive and grossly corrupted observations is a fundamental problem in robust statistics, with rich applications in computer vision and machine learning.

Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery

no code implementations 22 Jul 2013 Cun Mu, Bo Huang, John Wright, Donald Goldfarb

The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor.
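
The relaxation referred to above penalizes the nuclear norms of the mode-k unfoldings (matricizations) of the tensor. A small NumPy sketch of the unfolding operation and the resulting penalty value (function names and the toy tensor are illustrative):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest,
    giving a matrix of shape (T.shape[mode], prod(other dims))."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_of_nuclear_norms(T):
    """Sum over modes of the nuclear norm (sum of singular values) of each
    unfolding -- the convex surrogate discussed in the snippet above."""
    return sum(np.linalg.norm(unfold(T, k), "nuc") for k in range(T.ndim))

# Toy rank-1 tensor: outer-product structure makes every unfolding rank 1.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(8), rng.standard_normal(9), rng.standard_normal(10)
T = np.einsum("i,j,k->ijk", a, b, c)
print(sum_of_nuclear_norms(T))
```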

Learning with Partially Absorbing Random Walks

no code implementations NeurIPS 2012 Xiao-Ming Wu, Zhenguo Li, Anthony M. So, John Wright, Shih-Fu Chang

We prove that under proper absorption rates, a random walk starting from a set $\mathcal{S}$ of low conductance will be mostly absorbed in $\mathcal{S}$.

Efficient Point-to-Subspace Query in $\ell^1$ with Application to Robust Object Instance Recognition

no code implementations 2 Aug 2012 Ju Sun, Yuqian Zhang, John Wright

Motivated by vision tasks such as robust face and object recognition, we consider the following general problem: given a collection of low-dimensional linear subspaces in a high-dimensional ambient (image) space, and a query point (image), efficiently determine the nearest subspace to the query in $\ell^1$ distance.

Object Recognition
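
The core primitive in the snippet, the $\ell^1$ distance from a query point to a linear subspace, is itself a small convex program. The CVXPY sketch below states that primitive and a brute-force nearest-subspace search (the paper's contribution is avoiding the brute-force scan; names and dimensions here are illustrative).

```python
import numpy as np
import cvxpy as cp

def l1_distance_to_subspace(q, U):
    """min_c || q - U @ c ||_1 : the l1 distance from query q to span(U)."""
    c = cp.Variable(U.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(q - U @ c, 1)))
    prob.solve()
    return prob.value

def nearest_subspace_l1(q, subspaces):
    """Brute-force nearest-subspace query in l1 distance."""
    dists = [l1_distance_to_subspace(q, U) for U in subspaces]
    return int(np.argmin(dists)), min(dists)

# Toy usage: three random 5-dimensional subspaces of R^100.
rng = np.random.default_rng(0)
subspaces = [np.linalg.qr(rng.standard_normal((100, 5)))[0] for _ in range(3)]
q = subspaces[1] @ rng.standard_normal(5) + 0.01 * rng.standard_normal(100)
print(nearest_subspace_l1(q, subspaces))    # should select subspace index 1
```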

Compressive Principal Component Pursuit

1 code implementation 21 Feb 2012 John Wright, Arvind Ganesh, Kerui Min, Yi Ma

We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements.

Information Theory

Sparsity and Robustness in Face Recognition

no code implementations 3 Nov 2011 John Wright, Arvind Ganesh, Allen Yang, Zihan Zhou, Yi Ma

This report concerns the use of techniques for sparse signal representation and sparse error correction for automatic face recognition.

Face Recognition Robust Face Recognition

Stable Principal Component Pursuit

1 code implementation 14 Jan 2010 Zihan Zhou, XiaoDong Li, John Wright, Emmanuel Candes, Yi Ma

We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors.

Information Theory

Robust Principal Component Analysis?

3 code implementations 18 Dec 2009 Emmanuel J. Candes, Xiao-Dong Li, Yi Ma, John Wright

This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted.

Information Theory
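
The convex program behind this line of work, Principal Component Pursuit, decomposes an observed matrix into low-rank plus sparse parts and is short enough to state directly in CVXPY. The sketch below uses the universal weight $\lambda = 1/\sqrt{\max(n_1, n_2)}$ associated with this analysis; the toy data and function name are illustrative.

```python
import numpy as np
import cvxpy as cp

def principal_component_pursuit(M):
    """Robust PCA via Principal Component Pursuit:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M,
    with lam = 1 / sqrt(max(M.shape))."""
    lam = 1.0 / np.sqrt(max(M.shape))
    L = cp.Variable(M.shape)
    S = cp.Variable(M.shape)
    objective = cp.norm(L, "nuc") + lam * cp.sum(cp.abs(S))
    cp.Problem(cp.Minimize(objective), [L + S == M]).solve()
    return L.value, S.value

# Toy usage: rank-3 matrix with a fraction of entries grossly corrupted.
rng = np.random.default_rng(0)
n = 40
M = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))
mask = rng.random((n, n)) < 0.1
M[mask] = 20 * rng.standard_normal(mask.sum())
L_hat, S_hat = principal_component_pursuit(M)
```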

Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization

no code implementations NeurIPS 2009 John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, Yi Ma

Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis.
