Search Results for author: Robert M. Freund

Found 6 papers, 1 paper with code

Using Taylor-Approximated Gradients to Improve the Frank-Wolfe Method for Empirical Risk Minimization

no code implementations • 30 Aug 2022 • Zikai Xiong, Robert M. Freund

The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of the iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection.

Binary Classification
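The projection-free property described above can be made concrete with a minimal classic Frank-Wolfe sketch (not the paper's Taylor-approximated variant): minimizing a least-squares loss over an l1-ball, where the linear-minimization oracle just returns a signed vertex. The objective, feasible set, and step-size schedule here are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_l1(A, b, tau, iters=200):
    """Classic Frank-Wolfe for min 0.5*||Ax - b||^2 s.t. ||x||_1 <= tau."""
    n = A.shape[1]
    x = np.zeros(n)                      # feasible starting point
    for t in range(iters):
        grad = A.T @ (A @ x - b)         # gradient of the quadratic loss
        i = np.argmax(np.abs(grad))      # linear minimization over the
        s = np.zeros(n)                  # l1-ball: a single signed vertex,
        s[i] = -tau * np.sign(grad[i])   # far cheaper than projection
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # iterates stay sparse
    return x
```

Because each iterate is a convex combination of sparse vertices, the method exhibits the structure-inducing behavior the abstract refers to.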

Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods

no code implementations • 20 Oct 2018 • Robert M. Freund, Paul Grigas, Rahul Mazumder

When the training data is non-separable, we show that the degree of non-separability naturally enters the analysis and informs the properties and convergence guarantees of two standard first-order methods: steepest descent (for any given norm) and stochastic gradient descent.

Binary Classification • General Classification +1
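A minimal sketch of one of the two methods analyzed, Euclidean steepest descent (plain gradient descent) on the logistic loss; the synthetic data, constant step size, and iteration budget are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def logistic_loss(w, X, y):
    # y in {-1, +1}; average logistic loss over the sample
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def steepest_descent(X, y, step=1.0, iters=500):
    # steepest descent in the Euclidean norm = gradient descent
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(y * (X @ w)))  # sigmoid(-y * Xw)
        grad = -(X.T @ (y * p)) / len(y)       # average loss gradient
        w -= step * grad
    return w
```

On non-separable data the loss is bounded below away from zero, which is the regime where the paper's degree-of-non-separability quantity drives the convergence guarantees.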

An Extended Frank-Wolfe Method with "In-Face" Directions, and its Application to Low-Rank Matrix Completion

no code implementations • 6 Nov 2015 • Robert M. Freund, Paul Grigas, Rahul Mazumder

Motivated principally by the low-rank matrix completion problem, we present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region.

Low-Rank Matrix Completion
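For context, a minimal vanilla Frank-Wolfe sketch over the nuclear-norm ball (without the paper's "in-face" extension): the linear-minimization oracle returns a rank-1 matrix built from the top singular pair of the negative gradient, so every iterate is a sum of rank-1 terms. The fully observed objective and parameters here are illustrative assumptions.

```python
import numpy as np

def fw_matrix_completion(M, mask, delta, iters=300):
    """Frank-Wolfe for min 0.5*||mask*(Z - M)||_F^2 s.t. ||Z||_* <= delta.

    M holds the observed entries; mask is 1 where an entry is observed.
    """
    Z = np.zeros_like(M)
    for t in range(iters):
        G = mask * (Z - M)                       # gradient of the loss
        U, _, Vt = np.linalg.svd(-G)             # top singular pair of -G
        S = delta * np.outer(U[:, 0], Vt[0, :])  # rank-1 vertex of the ball
        gamma = 2.0 / (t + 2.0)                  # standard step size
        Z = (1 - gamma) * Z + gamma * S          # iterate stays low-rank
    return Z
```

The paper's in-face directions go further by moving within a low-dimensional face of the ball to keep the rank of the iterates small.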

A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives

no code implementations • 16 May 2015 • Robert M. Freund, Paul Grigas, Rahul Mazumder

Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function.

regression
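To make the "same master algorithm" view concrete, here is a minimal subgradient-descent sketch on the (unregularized, rather than the paper's regularized) maximum absolute correlation loss g(beta) = ||X'(y - X*beta)||_inf; the data, step size, and best-iterate tracking are illustrative assumptions, the latter because subgradient steps are not monotone.

```python
import numpy as np

def max_abs_corr(beta, X, y):
    # maximum absolute correlation of the predictors with the residual
    return np.max(np.abs(X.T @ (y - X @ beta)))

def subgradient_descent(X, y, step=0.001, iters=500):
    beta = np.zeros(X.shape[1])
    best, best_val = beta.copy(), max_abs_corr(beta, X, y)
    for _ in range(iters):
        c = X.T @ (y - X @ beta)               # residual correlations
        j = np.argmax(np.abs(c))               # coordinate attaining the max
        g = -np.sign(c[j]) * (X.T @ X[:, j])   # a subgradient of the max
        beta -= step * g
        val = max_abs_corr(beta, X, y)
        if val < best_val:                     # subgradient descent is not
            best, best_val = beta.copy(), val  # monotone; keep the best
    return best
```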

AdaBoost and Forward Stagewise Regression are First-Order Convex Optimization Methods

no code implementations • 4 Jul 2013 • Robert M. Freund, Paul Grigas, Rahul Mazumder

Boosting methods are highly popular and effective supervised learning methods which combine weak learners into a single accurate model with good statistical performance.

regression
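A minimal forward stagewise regression sketch, the incremental "weak learner" procedure the paper recasts as a first-order convex optimization method: each round picks the predictor most correlated with the residual and takes a small step eps in that coordinate. The synthetic data, standardized-column assumption, and parameter values are illustrative.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, iters=1000):
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                          # current residual
    for _ in range(iters):
        corr = X.T @ r                    # correlation with the residual
        j = np.argmax(np.abs(corr))       # best single predictor
        delta = eps * np.sign(corr[j])    # tiny step, not a full refit
        beta[j] += delta
        r -= delta * X[:, j]              # update the residual
    return beta
```

The small step size is what produces the slowly grown, regularized coefficient paths that connect boosting to first-order methods.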
