Search Results for author: Daniel P. Robinson

Found 18 papers, 6 papers with code

A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

no code implementations • 28 Apr 2023 • Frank E. Curtis, Vyacheslav Kungurtsev, Daniel P. Robinson, Qi Wang

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results.
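The bound-constrained setting admits a compact illustration. Below is a minimal sketch, assuming a log-barrier interior-point flavor rather than the paper's exact algorithm: a stochastic gradient step on the barrier subproblem for $l \le x \le u$, with a fraction-to-the-boundary safeguard. The function names and step rule are illustrative.

```python
import numpy as np

def stochastic_barrier_step(x, stoch_grad, l, u, mu, alpha, tau=0.99):
    """One illustrative iteration: a stochastic gradient step on the
    log-barrier subproblem  f(x) - mu * sum(log(x - l) + log(u - x)).
    stoch_grad(x) returns an unbiased estimate of grad f(x)."""
    g = stoch_grad(x) - mu / (x - l) + mu / (u - x)  # barrier gradient
    x_new = x - alpha * g
    # fraction-to-the-boundary rule keeps the iterate strictly interior
    return np.clip(x_new, l + (1 - tau) * (x - l), u - (1 - tau) * (u - x))
```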

A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

1 code implementation • 24 Jun 2021 • Albert S. Berahas, Frank E. Curtis, Michael J. O'Neill, Daniel P. Robinson

A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function.
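For intuition about why rank-deficient Jacobians are delicate, here is a hedged sketch of a standard composite-step decomposition (a normal step toward feasibility plus a tangential step for the objective) that stays well defined via the pseudoinverse. This is an illustration of the difficulty, not the step computation proposed in the paper.

```python
import numpy as np

def step_with_rank_deficient_jacobian(g, J, c):
    """Illustrative composite step for min f(x) s.t. c(x) = 0, where
    g is a (possibly stochastic) gradient estimate, J = c'(x), and
    J may be rank deficient: the pseudoinverse keeps both pieces
    well defined even when J has dependent rows."""
    Jp = np.linalg.pinv(J)
    v = -Jp @ c                      # normal step: least-squares feasibility
    P = np.eye(J.shape[1]) - Jp @ J  # orthogonal projector onto null(J)
    w = -P @ g                       # tangential step: reduce the objective
    return v + w
```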

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

1 code implementation • 29 Jul 2020 • Frank E. Curtis, Yutong Dai, Daniel P. Robinson

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer.

Optimization and Control (MSC: 49M37, 65K05, 65K10, 65Y20, 68Q25, 90C30, 90C60)
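The regularizer's key computational primitive is its proximal operator, which zeroes whole groups at once (block soft-thresholding). A minimal sketch:

```python
import numpy as np

def prox_group_l2(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (group lasso).
    `groups` is a list of index arrays; a whole group is zeroed when
    its norm falls below lam, which is what induces group sparsity."""
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * x[g]
    return out
```

A subspace acceleration method can then restrict more powerful steps to the variables in the groups predicted to be nonzero; the sketch above shows only the prox itself.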

Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization

1 code implementation • 20 Jul 2020 • Albert Berahas, Frank E. Curtis, Daniel P. Robinson, Baoyu Zhou

It is assumed in this setting that it is intractable to compute objective function and derivative values explicitly, although one can compute stochastic function and gradient estimates.

Stochastic Optimization
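A hedged sketch of one iteration in this family: solve a Newton-KKT system with a stochastic gradient estimate in place of the true gradient. The fixed step size `alpha` and supplied Hessian approximation `H` are simplifications; the paper selects step sizes adaptively via a merit function.

```python
import numpy as np

def stochastic_sqp_step(x, stoch_grad, con, jac, H, alpha):
    """One illustrative stochastic SQP iteration for min f(x) s.t. c(x) = 0.
    Solves the KKT system  [H J^T; J 0][d; y] = -[g; c]  with a
    stochastic gradient estimate g replacing the true gradient."""
    g, c, J = stoch_grad(x), con(x), jac(x)
    n, m = x.size, c.size
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([g, c])
    sol = np.linalg.solve(K, rhs)
    d, y = sol[:n], sol[n:]
    return x + alpha * d, y   # new iterate and multiplier estimate
```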

Self-Representation Based Unsupervised Exemplar Selection in a Union of Subspaces

no code implementations • 7 Jun 2020 • Chong You, Chi Li, Daniel P. Robinson, Rene Vidal

When the dataset is drawn from a union of independent subspaces, our method is able to select sufficiently many representatives from each subspace.

Clustering
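As a rough illustration of exemplar selection (not the paper's exact criterion), one can grow the exemplar set greedily, each time adding the point worst represented by the current exemplars; plain least-squares coding stands in below for the sparse self-representation used in the paper.

```python
import numpy as np

def select_exemplars(X, k):
    """Greedy self-representation exemplar selection (illustrative):
    X holds unit-norm data points as columns.  Repeatedly add the
    point worst represented by the current exemplar set."""
    idx = [0]                                      # arbitrary seed
    for _ in range(k - 1):
        E = X[:, idx]
        C, *_ = np.linalg.lstsq(E, X, rcond=None)  # code every point
        residuals = np.linalg.norm(X - E @ C, axis=0)
        idx.append(int(np.argmax(residuals)))
    return idx
```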

Is an Affine Constraint Needed for Affine Subspace Clustering?

no code implementations • ICCV 2019 • Chong You, Chun-Guang Li, Daniel P. Robinson, Rene Vidal

Specifically, our analysis provides conditions that guarantee the correctness of affine subspace clustering methods both with and without the affine constraint, and shows that these conditions are satisfied for high-dimensional data.

Clustering • Face Clustering • +1
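The affine constraint in question is $\mathbf{1}^T c = 1$ on the self-expressive coefficients. A minimal sketch of imposing it exactly in a least-squares self-expression via the KKT system; the `ridge` term is an illustrative stabilizer, and dropping the last row and column recovers the unconstrained (linear) variant the paper compares against.

```python
import numpy as np

def affine_self_expression(A, y, ridge=1e-8):
    """Minimize ||y - A c||^2 subject to 1^T c = 1 by solving the
    KKT system of the equality-constrained least-squares problem."""
    n = A.shape[1]
    ones = np.ones((n, 1))
    K = np.block([[A.T @ A + ridge * np.eye(n), ones],
                  [ones.T, np.zeros((1, 1))]])
    rhs = np.concatenate([A.T @ y, [1.0]])
    c_nu = np.linalg.solve(K, rhs)
    return c_nu[:n]   # last entry is the Lagrange multiplier
```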

Basis Pursuit and Orthogonal Matching Pursuit for Subspace-preserving Recovery: Theoretical Analysis

no code implementations • 30 Dec 2019 • Daniel P. Robinson, Rene Vidal, Chong You

The goal is to have the representation $c$ correctly identify the subspace, i.e., the nonzero entries of $c$ should correspond to columns of $A$ that are in the subspace $\mathcal{S}_0$.
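A hedged sketch of the OMP side: run orthogonal matching pursuit and check whether the support lands only on columns from $\mathcal{S}_0$ (the index set `s0_columns` below is hypothetical).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then refit by least squares."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        c_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ c_s
    c = np.zeros(A.shape[1])
    c[support] = c_s
    return c

# Recovery is subspace-preserving when the support touches only
# columns of A lying in S_0:
# preserving = set(np.flatnonzero(c)) <= set(s0_columns)
```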

What is the Largest Sparsity Pattern that Can Be Recovered by 1-Norm Minimization?

no code implementations • 12 Oct 2019 • Mustafa D. Kaba, Mengnan Zhao, Rene Vidal, Daniel P. Robinson, Enrique Mallada

In the case of the partial discrete Fourier transform, our characterization of the largest sparsity pattern that can be recovered requires the unknown signal to be real and its dimension to be a prime number.
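For context, the 1-norm minimization in question is basis pursuit, $\min \|x\|_1$ s.t. $Ax = b$, which can be solved as a linear program via the split $x = x^+ - x^-$. A minimal sketch using SciPy, which one could use to test whether a given sparsity pattern is recovered:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 s.t. Ax = b, as an LP with x = xp - xm, xp, xm >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

# A pattern is recovered for a signal x0 supported on it when
# np.allclose(basis_pursuit(A, A @ x0), x0, atol=1e-6).
```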

Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization

no code implementations • 2 Aug 2019 • Guilherme França, Daniel P. Robinson, René Vidal

We show that similar discretization schemes applied to Newton's equation with an additional dissipative force, which we refer to as accelerated gradient flow, allow us to obtain accelerated variants of all these proximal algorithms; the majority of these are new, although some recover known cases in the literature.

BIG-bench Machine Learning • Distributed Optimization
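A minimal sketch of the underlying idea: discretize Newton's equation with friction, $\ddot{x} + \gamma \dot{x} = -\nabla f(x)$, with a semi-implicit Euler scheme, which already yields a heavy-ball-style iteration (the constants here are illustrative).

```python
import numpy as np

def accelerated_flow(grad_f, x0, gamma=3.0, h=0.01, steps=1000):
    """Semi-implicit Euler discretization of x'' + gamma x' = -grad f(x).
    With v = x':  v <- v - h*(gamma*v + grad f(x)),  x <- x + h*v."""
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        v = v - h * (gamma * v + grad_f(x))
        x = x + h * v
    return x
```

Applying analogous discretizations with proximal maps in place of gradients is what yields the accelerated proximal-splitting variants the paper studies.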

Conformal Symplectic and Relativistic Optimization

1 code implementation • NeurIPS 2020 • Guilherme França, Jeremias Sulam, Daniel P. Robinson, René Vidal

Arguably, the two most popular accelerated or momentum-based optimization methods in machine learning are Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different discretizations of a particular second-order differential equation with friction.

Friction
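A conformal symplectic integrator treats the dissipation exactly (momentum contracts by $e^{-\gamma h}$) and the conservative dynamics symplectically. A minimal one-step sketch under those assumptions; iterating it gives a heavy-ball-like method with exact momentum decay.

```python
import numpy as np

def conformal_symplectic_step(x, p, grad_f, gamma, h):
    """One conformal symplectic Euler step for x'' + gamma x' = -grad f(x):
    contract the momentum exactly, then apply a symplectic Euler step
    (kick-drift) to the conservative part."""
    p = np.exp(-gamma * h) * p   # exact dissipation
    p = p - h * grad_f(x)        # kick
    x = x + h * p                # drift
    return x, p
```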

Dual Principal Component Pursuit: Probability Analysis and Efficient Algorithms

no code implementations • 24 Dec 2018 • Zhihui Zhu, Yifan Wang, Daniel P. Robinson, Daniel Q. Naiman, Rene Vidal, Manolis C. Tsakiris

However, its geometric analysis is based on quantities that are difficult to interpret and are not amenable to statistical analysis.

Scalable Exemplar-based Subspace Clustering on Class-Imbalanced Data

no code implementations • ECCV 2018 • Chong You, Chi Li, Daniel P. Robinson, Rene Vidal

Our experiments demonstrate that the proposed method outperforms state-of-the-art subspace clustering methods on two large-scale, class-imbalanced image datasets.

Clustering • Image Classification

A Nonsmooth Dynamical Systems Perspective on Accelerated Extensions of ADMM

no code implementations • 13 Aug 2018 • Guilherme França, Daniel P. Robinson, René Vidal

Recently, there has been great interest in connections between continuous-time dynamical systems and optimization methods, notably in the context of accelerated methods for smooth and unconstrained problems.

Provable Self-Representation Based Outlier Detection in a Union of Subspaces

no code implementations • CVPR 2017 • Chong You, Daniel P. Robinson, René Vidal

While outlier detection methods based on robust statistics have existed for decades, only recently have methods based on sparse and low-rank representation been developed along with guarantees of correct outlier detection when the inliers lie in one or more low-dimensional subspaces.

Outlier Detection
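A hedged sketch in the spirit of representation-graph scoring (a simplified illustration, not the paper's exact procedure): build a transition matrix from a self-representation matrix `C` and score points by where a random walk concentrates. Outliers, being poorly represented by other points, tend to receive little probability mass.

```python
import numpy as np

def representation_outlier_scores(C, t=100):
    """Score points via a random walk on the representation graph.
    C[i, j] = coefficient of point j in the representation of point i;
    low final probability suggests an outlier."""
    W = np.abs(C)
    np.fill_diagonal(W, 0.0)
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    pi = np.full(C.shape[0], 1.0 / C.shape[0])
    for _ in range(t):
        pi = pi @ P
    return pi
```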

Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering

1 code implementation • CVPR 2016 • Chong You, Chun-Guang Li, Daniel P. Robinson, Rene Vidal

Our geometric analysis also provides a theoretical justification and a geometric interpretation for the balance between the connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to $\ell_1$ regularization) properties for elastic net subspace clustering.

Ranked #7 on Image Clustering on COIL-100 (Accuracy metric)

Clustering • Image Clustering
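The per-point subproblem is $\min_c \tfrac{1}{2}\|y - Ac\|_2^2 + \lambda\big(\alpha\|c\|_1 + \tfrac{1-\alpha}{2}\|c\|_2^2\big)$. A plain proximal-gradient sketch of that baseline solver; the paper's oracle-based active-set method is what makes it scale, and is not shown here.

```python
import numpy as np

def elastic_net_code(A, y, lam, alpha=0.9, steps=500):
    """Proximal gradient for 0.5||y - Ac||^2 + lam*(alpha*||c||_1
    + 0.5*(1-alpha)*||c||_2^2).  alpha -> 1 favors subspace-preserving
    (sparse, l1) codes; alpha -> 0 favors connectedness (dense, l2)."""
    L = np.linalg.norm(A, 2) ** 2 + lam * (1 - alpha)  # Lipschitz constant
    c = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ c - y) + lam * (1 - alpha) * c  # smooth part
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam * alpha / L, 0.0)
    return c
```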

Trading-Off Cost of Deployment Versus Accuracy in Learning Predictive Models

no code implementations • 20 Apr 2016 • Daniel P. Robinson, Suchi Saria

For the challenging real-world application of risk prediction for sepsis in intensive care units, the use of our regularizer leads to models that are in harmony with the underlying cost structure and thus provide an excellent prediction accuracy versus cost tradeoff.

Scalable Sparse Subspace Clustering by Orthogonal Matching Pursuit

2 code implementations • CVPR 2016 • Chong You, Daniel P. Robinson, Rene Vidal

Subspace clustering methods based on $\ell_1$, $\ell_2$ or nuclear norm regularization have become very popular due to their simplicity, theoretical guarantees and empirical success.

Clustering • Face Clustering
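A hedged end-to-end sketch of the SSC-by-OMP pipeline: code each point over the remaining points with OMP, symmetrize the coefficient magnitudes into an affinity, and cluster spectrally. scikit-learn is assumed for both steps, and the parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.cluster import SpectralClustering

def ssc_omp(X, n_clusters, k=5):
    """Sketch of SSC-OMP: X holds data points as columns.  Each point is
    coded by OMP over all other points; |C| + |C|^T is the affinity."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        others = np.delete(np.arange(n), j)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                        fit_intercept=False)
        omp.fit(X[:, others], X[:, j])
        C[others, j] = omp.coef_
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```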
