Search Results for author: Petros Drineas

Found 24 papers, 2 papers with code

Stochastic Rounding Implicitly Regularizes Tall-and-Thin Matrices

no code implementations18 Mar 2024 Gregory Dexter, Christos Boutsikas, Linkai Ma, Ilse C. F. Ipsen, Petros Drineas

Motivated by the popularity of stochastic rounding in the context of machine learning and the training of large-scale deep neural network models, we consider stochastic nearness rounding of real matrices $\mathbf{A}$ with many more rows than columns.
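
A minimal sketch of the rounding operation being analyzed, assuming a simple fixed-point grid; the grid spacing and matrix sizes are illustrative choices, not parameters from the paper:

```python
import numpy as np

def stochastic_round(A, eps=0.1, rng=None):
    """Round each entry of A to the grid {k*eps} stochastically: round up with
    probability equal to the fractional distance past the lower grid point,
    so that the rounded matrix equals A in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    lower = np.floor(A / eps) * eps              # nearest grid point below
    frac = (A - lower) / eps                     # in [0, 1)
    up = rng.random(A.shape) < frac              # round up with probability frac
    return lower + up * eps

rng = np.random.default_rng(0)
A = rng.standard_normal((10_000, 20))            # many more rows than columns
A_sr = stochastic_round(A, eps=0.1, rng=rng)
print(np.abs(A_sr - A).max())                                # entrywise error below eps
print(np.linalg.norm(A_sr.mean(axis=0) - A.mean(axis=0)))    # unbiased on average
```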

Feature Space Sketching for Logistic Regression

no code implementations24 Mar 2023 Gregory Dexter, Rajiv Khanna, Jawad Raheel, Petros Drineas

We present novel bounds for coreset construction, feature selection, and dimensionality reduction for logistic regression.

Dimensionality Reduction feature selection +1
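
A rough illustration of the coreset idea for logistic regression, assuming plain $\ell_2$ row leverage scores as a generic stand-in for the paper's problem-specific scores; the coreset size is an illustrative choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, m = 20_000, 10, 500                       # m = coreset size (illustrative)
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Importance sampling by l2 leverage scores of X (a stand-in; the paper
# derives scores tailored to the logistic loss).
Q, _ = np.linalg.qr(X)
lev = (Q ** 2).sum(axis=1)
p = lev / lev.sum()
idx = rng.choice(n, size=m, replace=True, p=p)
weights = 1.0 / (m * p[idx])                    # reweight to keep the loss unbiased

full = LogisticRegression(max_iter=1000).fit(X, y)
core = LogisticRegression(max_iter=1000).fit(X[idx], y[idx], sample_weight=weights)
print(np.linalg.norm(full.coef_ - core.coef_))  # coefficients should be close
```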

Low-Rank Updates of Matrix Square Roots

no code implementations31 Jan 2022 Shany Shumeli, Petros Drineas, Haim Avron

Given a low-rank perturbation to a matrix, we argue that a low-rank approximate correction to the (inverse) square root exists.
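
A quick numerical illustration of the claim, assuming a symmetric positive definite matrix and a rank-$k$ perturbation; the sizes are illustrative and the decay check is informal:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n, k = 200, 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD matrix
U = rng.standard_normal((n, k))
E = U @ U.T                                                     # rank-k PSD perturbation

# Difference of the square roots before and after the perturbation.
D = np.real(sqrtm(A + E) - sqrtm(A))     # sqrtm may carry a negligible imaginary part
s = np.linalg.svd(D, compute_uv=False)
print(s[:10] / s[0])    # fast singular-value decay: D is numerically close to low rank
```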

Faster Randomized Infeasible Interior Point Methods for Tall/Wide Linear Programs

no code implementations NeurIPS 2020 Agniva Chowdhury, Palma London, Haim Avron, Petros Drineas

Linear programming (LP) is used in many machine learning applications, such as $\ell_1$-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc.
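
To make one of the listed applications concrete, basis pursuit can be written as an LP via the standard split $x = u - v$ with $u, v \ge 0$; this is the textbook reformulation, solved here with an off-the-shelf solver rather than the paper's randomized interior point method:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d = 30, 100                                  # wide system: more unknowns than equations
A = rng.standard_normal((n, d))
x_sparse = np.zeros(d)
x_sparse[rng.choice(d, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_sparse

# Basis pursuit  min ||x||_1  s.t.  Ax = b,  with x = u - v, u, v >= 0:
# minimize 1'(u + v)  subject to  [A, -A][u; v] = b.
c = np.ones(2 * d)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:d] - res.x[d:]
print(np.count_nonzero(np.abs(x_hat) > 1e-8), np.linalg.norm(A @ x_hat - b))
```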

Approximation Algorithms for Sparse Principal Component Analysis

no code implementations23 Jun 2020 Agniva Chowdhury, Petros Drineas, David P. Woodruff, Samson Zhou

To improve the interpretability of PCA, various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis (SPCA).

Dimensionality Reduction
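
A minimal sketch of the sparse PCA objective using a simple truncated power iteration; this is a well-known heuristic for illustration only, not the approximation algorithms analyzed in the paper:

```python
import numpy as np

def truncated_power_sparse_pca(S, k, iters=200, rng=None):
    """Power iteration on the covariance S, keeping only the k largest-magnitude
    coordinates at each step (an illustrative sparse-PCA heuristic)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(S.shape[0]); x /= np.linalg.norm(x)
    for _ in range(iters):
        x = S @ x
        keep = np.argsort(np.abs(x))[-k:]        # indices of the k largest entries
        mask = np.zeros_like(x); mask[keep] = 1.0
        x *= mask
        x /= np.linalg.norm(x)
    return x

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
S = X.T @ X / 1000
x = truncated_power_sparse_pca(S, k=5, rng=rng)
print(np.count_nonzero(x), x @ S @ x)            # sparsity and explained variance
```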

Randomized Iterative Algorithms for Fisher Discriminant Analysis

no code implementations9 Sep 2018 Agniva Chowdhury, Jiasen Yang, Petros Drineas

When the number of predictor variables greatly exceeds the number of observations, one alternative to conventional FDA is regularized Fisher discriminant analysis (RFDA).

Dimensionality Reduction
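
For reference, a direct dense solve of the two-class RFDA system in the regime the abstract describes; the paper's contribution is randomized iterative solvers for this system, which this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 500, 1.0                    # far more predictors than observations
X1 = rng.standard_normal((n, d)) + 0.5      # class 1
X2 = rng.standard_normal((n, d)) - 0.5      # class 2

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)   # within-class scatter

# RFDA direction: regularize the (singular) scatter matrix before solving.
w = np.linalg.solve(Sw + lam * np.eye(d), mu1 - mu2)

# Classify by projecting onto w and thresholding at the midpoint of the class means.
thresh = 0.5 * (mu1 + mu2) @ w
acc = ((X1 @ w > thresh).mean() + (X2 @ w <= thresh).mean()) / 2
print(acc)
```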

An Iterative, Sketching-based Framework for Ridge Regression

no code implementations ICML 2018 Agniva Chowdhury, Jiasen Yang, Petros Drineas

Ridge regression is a variant of regularized least squares regression that is particularly suitable in settings where the number of predictor variables greatly exceeds the number of observations.

regression
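
A generic sketch-and-iterate scheme in the spirit of the title, using a Gaussian sketch of the Gram matrix as a preconditioner; the sketch type, sizes, and step rule are illustrative assumptions, not necessarily the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, s = 5_000, 20, 1.0, 1_000
A = rng.standard_normal((n, d)); b = rng.standard_normal(n)

# Preconditioner built from a sketched Gram matrix: (SA)^T (SA) + lam*I.
S = rng.standard_normal((s, n)) / np.sqrt(s)
SA = S @ A
M = SA.T @ SA + lam * np.eye(d)

x = np.zeros(d)
for _ in range(30):
    grad = A.T @ (A @ x - b) + lam * x      # gradient of 0.5||Ax-b||^2 + 0.5*lam||x||^2
    x -= np.linalg.solve(M, grad)           # preconditioned (approximate Newton) step

x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
print(np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))
```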

Constructing Compact Brain Connectomes for Individual Fingerprinting

no code implementations22 May 2018 Vikram Ravindra, Petros Drineas, Ananth Grama

Recent neuroimaging studies have shown that functional connectomes are unique to individuals, i.e., two distinct fMRIs taken over different sessions of the same subject are more similar in terms of their connectomes than those from two different subjects.
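
A toy version of the fingerprinting setup on synthetic time series, using full correlation-matrix connectomes and correlation-based matching; the paper's contribution is constructing compact connectomes, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 20, 30, 200

def session(signature, rng):
    """Synthetic fMRI-like time series: subject-specific mixing plus noise."""
    latent = rng.standard_normal((n_timepoints, n_regions))
    return latent @ signature + 0.8 * rng.standard_normal((n_timepoints, n_regions))

def connectome(ts):
    """Functional connectome: upper triangle of the region-by-region correlation matrix."""
    C = np.corrcoef(ts.T)
    iu = np.triu_indices_from(C, k=1)
    return C[iu]

signatures = [rng.standard_normal((n_regions, n_regions)) for _ in range(n_subjects)]
sess1 = np.array([connectome(session(s, rng)) for s in signatures])
sess2 = np.array([connectome(session(s, rng)) for s in signatures])

# Identify each session-2 scan as the session-1 scan with the most correlated connectome.
sim = np.corrcoef(sess2, sess1)[:n_subjects, n_subjects:]
print((sim.argmax(axis=1) == np.arange(n_subjects)).mean())   # identification accuracy
```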

Lectures on Randomized Numerical Linear Algebra

1 code implementation24 Dec 2017 Petros Drineas, Michael W. Mahoney

This chapter is based on lectures on Randomized Numerical Linear Algebra from the 2016 Park City Mathematics Institute summer school on The Mathematics of Data.

Structural Conditions for Projection-Cost Preservation via Randomized Matrix Multiplication

no code implementations29 May 2017 Agniva Chowdhury, Jiasen Yang, Petros Drineas

Projection-cost preservation is a low-rank approximation guarantee which ensures that the cost of any rank-$k$ projection can be preserved using a smaller sketch of the original data matrix.
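
A quick numerical illustration of the definition, assuming a dense Gaussian column-reducing sketch and a handful of test projections; the paper analyzes constructions based on randomized matrix multiplication rather than this particular sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_sketch, k = 300, 1000, 120, 5
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d)) + 0.1 * rng.standard_normal((n, d))

# Column-reducing sketch of the data matrix.
A_sk = A @ (rng.standard_normal((d, d_sketch)) / np.sqrt(d_sketch))

def cost(M, P):
    return np.linalg.norm(M - P @ M) ** 2       # projection cost ||M - PM||_F^2

# The best rank-k projection of A, plus a few random rank-k projections.
U, _, _ = np.linalg.svd(A, full_matrices=False)
projections = [U[:, :k] @ U[:, :k].T]
for _ in range(3):
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    projections.append(Q @ Q.T)

for P in projections:
    print(cost(A, P), cost(A_sk, P))            # each pair of costs should be close
```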

A Randomized Rounding Algorithm for Sparse PCA

no code implementations13 Aug 2015 Kimon Fountoulakis, Abhisek Kundu, Eugenia-Maria Kontopoulou, Petros Drineas

We present and analyze a simple, two-step algorithm to approximate the optimal solution of the sparse PCA problem.

Feature Selection for Ridge Regression with Provable Guarantees

no code implementations17 Jun 2015 Saurabh Paul, Petros Drineas

We introduce single-set spectral sparsification as a deterministic, sampling-based feature selection technique for regularized least-squares classification, which is the classification analogue of ridge regression.

Classification feature selection +2

Approximating Sparse PCA from Incomplete Data

no code implementations NeurIPS 2015 Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail

We show that for a wide class of optimization problems, if the sketch is close (in the spectral norm) to the original data matrix, then one can recover a near optimal solution to the optimization problem by using the sketch.

Math

Recovering PCA from Hybrid-$(\ell_1,\ell_2)$ Sparse Sampling of Data Elements

no code implementations2 Mar 2015 Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail

This paper addresses how well we can recover a data matrix when only given a few of its elements.
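
A sketch of the element-sampling setting: entries are kept with probabilities mixing $\ell_1$ and $\ell_2$ importance, rescaled to keep the estimate unbiased, and a truncated SVD of the sparse estimate approximates the principal subspace. The mixing weight and sample budget are illustrative choices, not the paper's prescriptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 300, 200, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) + 0.05 * rng.standard_normal((m, n))

# Hybrid (l1, l2) element-sampling probabilities.
alpha, budget = 0.5, 30_000
p = alpha * np.abs(A) / np.abs(A).sum() + (1 - alpha) * A**2 / (A**2).sum()
p = np.minimum(budget * p, 1.0)                  # expected number of observed entries

keep = rng.random(A.shape) < p
A_est = np.where(keep, A / p, 0.0)               # rescaled so that E[A_est] = A

# Compare the top-k left singular subspaces of A and of the sparse estimate.
U = np.linalg.svd(A, full_matrices=False)[0][:, :k]
U_est = np.linalg.svd(A_est, full_matrices=False)[0][:, :k]
angles = np.linalg.svd(U.T @ U_est, compute_uv=False)
print(angles.min())                              # close to 1 when the subspaces align
```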

Feature Selection for Linear SVM with Provable Guarantees

no code implementations1 Jun 2014 Saurabh Paul, Malik Magdon-Ismail, Petros Drineas

In the unsupervised setting, we also provide worst-case guarantees on the radius of the minimum enclosing ball, thereby ensuring generalization comparable to that in the full feature space and resolving an open problem posed in Dasgupta et al. We present extensive experiments on real-world datasets to support our theory and to demonstrate that our method is competitive and often better than prior state-of-the-art, for which there are no known provable guarantees.

feature selection

Identifying Influential Entries in a Matrix

no code implementations14 Oct 2013 Abhisek Kundu, Srinivas Nambirajan, Petros Drineas

For any matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ of rank $\rho$, we present a probability distribution over the entries of $\mathbf{A}$ (the element-wise leverage scores of equation (2)) that reveals the most influential entries in the matrix.

Matrix Completion
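
For flavor only, one hypothetical element-wise importance construction built from row and column leverage scores; this is a common device in the matrix completion literature and is not claimed to be the paper's equation (2):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, rho = 200, 150, 4
A = rng.standard_normal((m, rho)) @ rng.standard_normal((rho, n))   # rank-rho matrix

# Hypothetical element-wise importance scores (NOT the paper's equation (2)):
# combine the row and column leverage scores from the rank-rho SVD of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
row_lev = (U[:, :rho] ** 2).sum(axis=1)     # row leverage scores, sum to rho
col_lev = (Vt[:rho, :] ** 2).sum(axis=0)    # column leverage scores, sum to rho
P = row_lev[:, None] + col_lev[None, :]
P /= P.sum()                                # probability distribution over all m*n entries

# Flag the ten entries with the largest probabilities as most influential.
top = np.argsort(P, axis=None)[::-1][:10]
print(np.column_stack(np.unravel_index(top, P.shape)))
```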

The Fast Cauchy Transform and Faster Robust Linear Regression

no code implementations19 Jul 2012 Kenneth L. Clarkson, Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, Xiangrui Meng, David P. Woodruff

We provide fast algorithms for overconstrained $\ell_p$ regression and related problems: for an $n\times d$ input matrix $A$ and vector $b\in\mathbb{R}^n$, in $O(nd\log n)$ time we reduce the problem $\min_{x\in\mathbb{R}^d} \|Ax-b\|_p$ to the same problem with input matrix $\tilde A$ of dimension $s \times d$ and corresponding $\tilde b$ of dimension $s\times 1$.

regression
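
To illustrate the reduction paradigm in the $\ell_2$ case with a plain Gaussian sketch; the paper's Fast Cauchy Transform plays the analogous role for $\ell_1$ and runs in $O(nd\log n)$ time, which this dense sketch does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 20_000, 20, 400
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Reduce the n x d problem to an s x d problem with a random embedding, then solve.
S = rng.standard_normal((s, n)) / np.sqrt(s)
A_t, b_t = S @ A, S @ b

x_sketch, *_ = np.linalg.lstsq(A_t, b_t, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_full - b))   # close to 1
```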

Near-optimal Coresets For Least-Squares Regression

no code implementations16 Feb 2012 Christos Boutsidis, Petros Drineas, Malik Magdon-Ismail

We study (constrained) least-squares regression, as well as multiple-response least-squares regression, and ask whether a subset of the data, a coreset, suffices to compute a good approximate solution to the regression problem.

regression
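
As a point of reference, a standard leverage-score sampling coreset for unconstrained least squares; the paper's near-optimal coresets are deterministic and smaller, so this randomized sketch is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20_000, 30, 600                      # m = coreset size (illustrative)
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

# Row leverage scores from a thin QR of A.
Q, _ = np.linalg.qr(A)
lev = (Q ** 2).sum(axis=1)
p = lev / lev.sum()

# Sample m rows with replacement and rescale so the subproblem is unbiased.
idx = rng.choice(n, size=m, replace=True, p=p)
w = 1.0 / np.sqrt(m * p[idx])
x_core, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_core - b) / np.linalg.norm(A @ x_full - b))   # close to 1
```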

Randomized Dimensionality Reduction for k-means Clustering

no code implementations13 Oct 2011 Christos Boutsidis, Anastasios Zouzias, Michael W. Mahoney, Petros Drineas

On the other hand, two provably accurate feature extraction methods for $k$-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD).

Clustering Dimensionality Reduction +1
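
A minimal sketch of the random-projection route: replace the original features with a few random linear combinations, cluster in the reduced space, and compare the k-means cost in the original space. The dimensions are illustrative and no claim is made about matching the paper's guarantees:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, d, d_red, k = 2_000, 500, 40, 5
centers = 5 * rng.standard_normal((k, d))
X = centers[rng.integers(k, size=n)] + rng.standard_normal((n, d))

# Feature extraction by random projection, then k-means in the reduced space.
R = rng.standard_normal((d, d_red)) / np.sqrt(d_red)
labels_red = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X @ R)
labels_full = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# Compare k-means costs in the original space under both labelings.
def cost(X, labels):
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum() for c in range(k))

print(cost(X, labels_red) / cost(X, labels_full))   # close to 1
```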

Effective Resistances, Statistical Leverage, and Applications to Linear Equation Solving

1 code implementation18 May 2010 Petros Drineas, Michael W. Mahoney

Our first and main result is a simple algorithm to approximate the solution to a set of linear equations defined by a Laplacian constraint matrix (for a graph $G$ with $n$ nodes and $m \le n^2$ edges).

Numerical Analysis
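
A dense, exact reference computation to fix the objects involved: effective resistances are the leverage scores of the rows of the edge-node incidence matrix, and the Laplacian system is solved here with a pseudoinverse. The paper's point is doing both approximately and fast, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_edges = 50, 300
edges = set()
while len(edges) < n_edges:                 # random (almost surely connected) graph
    i, j = rng.integers(n_nodes, size=2)
    if i != j:
        edges.add((min(i, j), max(i, j)))
edges = sorted(edges)

# Edge-node incidence matrix B and unit-weight Laplacian L = B^T B.
B = np.zeros((len(edges), n_nodes))
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = 1.0, -1.0
L = B.T @ B

# Effective resistance of edge e = leverage score of row e of B = (B L^+ B^T)_{ee}.
L_pinv = np.linalg.pinv(L)
eff_res = np.einsum("ei,ij,ej->e", B, L_pinv, B)

# Solve a Laplacian system L x = b with b orthogonal to the all-ones vector.
b = rng.standard_normal(n_nodes); b -= b.mean()
x = L_pinv @ b
print(eff_res[:5], np.linalg.norm(L @ x - b))
```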
