Search Results for author: Paul Grigas

Found 14 papers, 2 papers with code

Binary Classification with Instance and Label Dependent Label Noise

no code implementations • 6 Jun 2023 • Hyungki Im, Paul Grigas

Our findings suggest that learning solely with noisy samples is impossible without access to clean samples or strong assumptions on the distribution of the data.

Binary Classification • Classification

Online Contextual Decision-Making with a Smart Predict-then-Optimize Method

no code implementations • 15 Jun 2022 • Heyuan Liu, Paul Grigas

We propose an algorithm that mixes a prediction step based on the "Smart Predict-then-Optimize (SPO)" method with a dual update step based on mirror descent.

Decision Making
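
Purely as an illustration of the mix described above (not the paper's exact algorithm), the sketch below pairs a fixed linear cost predictor with a unit-simplex decision oracle and a single dual price updated by entropic mirror descent; the predictor B, the resource vector, the budget rate rho, and the step size eta are all hypothetical choices.

```python
import numpy as np

def decision_oracle(c):
    """argmin of c @ w over the unit simplex (a hypothetical feasible region)."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

rng = np.random.default_rng(0)
T, d, p = 500, 4, 3                  # horizon, context dim, decision dim (all hypothetical)
B = rng.normal(size=(p, d))          # stand-in for a cost predictor trained with the SPO method
resource = rng.uniform(0.5, 1.5, p)  # resource consumed by each decision (toy values)
rho = 0.8                            # per-round resource budget (toy value)
eta, lam = 0.1, 1.0                  # mirror-descent step size and dual price

for t in range(T):
    x_t = rng.normal(size=d)
    c_hat = B @ x_t                                 # prediction step: estimate the cost vector
    w_t = decision_oracle(c_hat + lam * resource)   # decide against dual-adjusted costs
    g = resource @ w_t - rho                        # resource over-/under-use this round
    lam = lam * np.exp(eta * g)                     # dual update: entropic mirror descent

print("final dual price:", round(lam, 3))
```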

Integrated Conditional Estimation-Optimization

no code implementations • 24 Oct 2021 • Meng Qi, Paul Grigas, Zuo-Jun Max Shen

In contrast to the standard approach of first estimating the distribution of uncertain parameters and then optimizing the objective based on the estimation, we propose an integrated conditional estimation-optimization (ICEO) framework that estimates the underlying conditional distribution of the random parameter while considering the structure of the optimization problem.

Generalization Bounds

Risk Bounds and Calibration for a Smart Predict-then-Optimize Method

no code implementations • NeurIPS 2021 • Heyuan Liu, Paul Grigas

We develop risk bounds and uniform calibration results for the SPO+ loss relative to the SPO loss, which provide a quantitative way to transfer the excess surrogate risk to excess true risk.

Decision Making • Generalization Bounds • +1

Joint Online Learning and Decision-making via Dual Mirror Descent

no code implementations • 20 Apr 2021 • Alfonso Lobos, Paul Grigas, Zheng Wen

We consider an online revenue maximization problem over a finite time horizon subject to lower and upper bounds on cost.

Decision Making
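
A toy sketch of dual mirror descent for an online accept/reject revenue stream with only an upper bound on cumulative cost; the horizon, budget, step size, uniform reward/cost draws, and the entropic (multiplicative) dual update are assumptions chosen for brevity rather than the paper's exact algorithm, and the lower bound on cost is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000                  # horizon (hypothetical)
budget = 300.0            # upper bound on cumulative cost (hypothetical)
rho = budget / T          # target per-round spend rate
eta, lam = 0.05, 1.0      # mirror-descent step size and dual price on the cost constraint
revenue, spent = 0.0, 0.0

for t in range(T):
    r_t, c_t = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)  # toy opportunity: revenue and cost

    # Primal step: accept only if revenue beats the dual-priced cost (and the budget allows it).
    take = (r_t - lam * c_t > 0.0) and (spent + c_t <= budget)
    if take:
        revenue += r_t
        spent += c_t

    # Dual step: entropic mirror descent (multiplicative update) on the spend-rate subgradient.
    g = (c_t if take else 0.0) - rho
    lam *= np.exp(eta * g)        # raise the price when spending above the target rate

print(f"revenue={revenue:.1f}  spent={spent:.1f}  final dual price={lam:.3f}")
```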

Stochastic In-Face Frank-Wolfe Methods for Non-Convex Optimization and Sparse Neural Network Training

1 code implementation • 9 Jun 2019 • Paul Grigas, Alfonso Lobos, Nathan Vermeersch

The Frank-Wolfe method and its extensions are well-suited for delivering solutions with desirable structural properties, such as sparsity or low-rank structure.
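
As a concrete instance of the structural property mentioned above, here is a minimal (deterministic, convex) Frank-Wolfe sketch for least squares over an l1 ball, where each step adds at most one nonzero coordinate; the problem data, radius, and iteration count are arbitrary assumptions, and the stochastic, non-convex setting of the paper is not reproduced.

```python
import numpy as np

def frank_wolfe_l1(A, b, radius, iters=100):
    """Minimize 0.5*||A x - b||^2 over the l1 ball of the given radius.

    Each Frank-Wolfe step moves toward a vertex (+/- radius * e_j), so the
    iterate after k steps has at most k nonzero coordinates, which is the
    sparsity property mentioned in the abstract.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for k in range(iters):
        grad = A.T @ (A @ x - b)
        j = np.argmax(np.abs(grad))           # linear-minimization oracle for the l1 ball
        s = np.zeros(n)
        s[j] = -radius * np.sign(grad[j])     # vertex of the l1 ball
        gamma = 2.0 / (k + 2.0)               # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Toy usage with random data (illustrative only).
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 200))
b = rng.normal(size=50)
x = frank_wolfe_l1(A, b, radius=5.0, iters=30)
print("nonzero coordinates:", np.count_nonzero(x))
```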

Generalization Bounds in the Predict-then-Optimize Framework

no code implementations • NeurIPS 2019 • Othman El Balghiti, Adam N. Elmachtoub, Paul Grigas, Ambuj Tewari

A natural loss function in this environment is the cost of the decisions induced by the predicted parameters, in contrast to the prediction error of the parameters.

Generalization Bounds
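
The loss described above is the SPO loss; the toy example below contrasts it with squared prediction error using a unit-simplex feasible region as a stand-in for a generic linear-objective problem (the region and the cost vectors are illustrative assumptions): two predictions with identical squared error induce different decision costs.

```python
import numpy as np

def decision_oracle(c):
    """argmin of c @ w over the unit simplex (a hypothetical feasible region)."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def spo_loss(c_pred, c_true):
    """Cost of the decision induced by the prediction, minus the optimal cost."""
    return c_true @ decision_oracle(c_pred) - c_true @ decision_oracle(c_true)

c_true = np.array([3.0, 1.0, 1.2])
c_a = np.array([3.0, 1.5, 1.7])   # same squared error as c_b, cost ranking preserved
c_b = np.array([3.0, 1.5, 0.7])   # same squared error, but the induced decision changes

for c_pred in (c_a, c_b):
    err = np.sum((c_pred - c_true) ** 2)
    print(f"prediction error={err:.2f}  decision (SPO) loss={spo_loss(c_pred, c_true):.2f}")
```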

Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods

no code implementations • 20 Oct 2018 • Robert M. Freund, Paul Grigas, Rahul Mazumder

When the training data is non-separable, we show that the degree of non-separability naturally enters the analysis and informs the properties and convergence guarantees of two standard first-order methods: steepest descent (for any given norm) and stochastic gradient descent.

Binary Classification • General Classification • +1
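
For concreteness, a minimal sketch of one of the two methods mentioned above, steepest descent in the Euclidean norm (plain gradient descent) on the average logistic loss over a toy non-separable dataset; the data, fixed step size, and iteration count are assumptions, and stochastic gradient descent would simply replace the full gradient with a sampled one.

```python
import numpy as np

def logistic_loss_and_grad(theta, X, y):
    """Average logistic loss and its gradient; labels y take values in {-1, +1}."""
    margins = y * (X @ theta)
    loss = np.mean(np.log1p(np.exp(-margins)))
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
    return loss, grad

# Toy non-separable data (illustrative only).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5) + 0.5 * rng.normal(size=200))

theta = np.zeros(5)
step = 0.5                                     # fixed step size for the sketch
for _ in range(500):
    loss, grad = logistic_loss_and_grad(theta, X, y)
    theta -= step * grad                       # steepest descent in the l2 norm
print(f"final training loss: {loss:.4f}")
```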

Smart "Predict, then Optimize"

1 code implementation • 22 Oct 2017 • Adam N. Elmachtoub, Paul Grigas

Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective.

Portfolio Optimization
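
A small numeric sketch of the SPO+ surrogate loss evaluated through a linear-optimization oracle; the unit-simplex feasible region and the cost vectors below are assumptions for illustration only, standing in for any problem with a linear objective that such an oracle can solve.

```python
import numpy as np

def linear_oracle(c):
    """argmin of c @ w over the unit simplex (a hypothetical feasible region,
    standing in for any polyhedron with a linear-optimization oracle)."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def spo_plus_loss(c_pred, c_true):
    """SPO+ loss of Elmachtoub & Grigas:
    max_w (c - 2*c_pred) @ w  +  2*c_pred @ w*(c)  -  c @ w*(c),
    where w*(c) is an optimal decision for the true cost vector c."""
    w_star = linear_oracle(c_true)
    # max over the feasible region = -min of the negated objective
    w_max = linear_oracle(-(c_true - 2.0 * c_pred))
    return (c_true - 2.0 * c_pred) @ w_max + 2.0 * c_pred @ w_star - c_true @ w_star

c_true = np.array([3.0, 1.0, 2.0])
print(spo_plus_loss(np.array([3.0, 1.0, 2.0]), c_true))   # perfect prediction -> 0
print(spo_plus_loss(np.array([1.0, 3.0, 2.0]), c_true))   # cost ranking flipped -> positive
```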

Profit Maximization for Online Advertising Demand-Side Platforms

no code implementations • 6 Jun 2017 • Paul Grigas, Alfonso Lobos, Zheng Wen, Kuang-Chih Lee

We develop an optimization model and corresponding algorithm for the management of a demand-side platform (DSP), whereby the DSP aims to maximize its own profit while acquiring valuable impressions for its advertiser clients.

Optimization and Control • Computer Science and Game Theory

An Extended Frank-Wolfe Method with "In-Face" Directions, and its Application to Low-Rank Matrix Completion

no code implementations • 6 Nov 2015 • Robert M. Freund, Paul Grigas, Rahul Mazumder

Motivated principally by the low-rank matrix completion problem, we present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region.

Low-Rank Matrix Completion
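
To ground the abstract above, here is the base Frank-Wolfe step for nuclear-norm-constrained matrix completion, whose rank-one linear-minimization oracle is what keeps the iterates low-rank; the paper's in-face directions are not implemented here, and the toy data, radius, and iteration count are assumptions.

```python
import numpy as np

def frank_wolfe_matrix_completion(M_obs, mask, delta, iters=50):
    """Minimize 0.5*||P_Omega(Z - M)||_F^2 over {Z : ||Z||_* <= delta} with the
    basic Frank-Wolfe method.  The linear-minimization oracle returns a rank-one
    matrix built from the top singular pair of the gradient, so the iterate after
    k steps has rank at most k.  (The paper's "in-face" directions, which further
    promote low rank, are omitted from this sketch.)
    """
    Z = np.zeros_like(M_obs)
    for k in range(iters):
        G = mask * (Z - M_obs)                    # gradient on the observed entries
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        S = -delta * np.outer(U[:, 0], Vt[0, :])  # oracle: argmin <G, S> over the nuclear-norm ball
        gamma = 2.0 / (k + 2.0)
        Z = (1 - gamma) * Z + gamma * S
    return Z

# Toy usage: a random rank-2 matrix with roughly half the entries observed (illustrative only).
rng = np.random.default_rng(4)
M = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
mask = (rng.uniform(size=M.shape) < 0.5).astype(float)
Z = frank_wolfe_matrix_completion(mask * M, mask, delta=np.linalg.norm(M, "nuc"), iters=40)
print("observed-entry RMSE:", np.sqrt(np.sum((mask * (Z - M)) ** 2) / mask.sum()))
```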

A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives

no code implementations • 16 May 2015 • Robert M. Freund, Paul Grigas, Rahul Mazumder

Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function.

regression
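
A minimal sketch of the unregularized version of this correspondence: incremental forward stagewise regression written as a subgradient step on the maximum absolute correlation between the residual and the features; the synthetic data, the step size eps, and the iteration count are illustrative assumptions.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, iters=500):
    """Incremental forward stagewise regression, viewed as subgradient descent on
    the maximum absolute correlation f(r) = ||X.T r||_inf over residuals
    r = y - X beta.  Columns of X are assumed to be normalized."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()
    for _ in range(iters):
        corr = X.T @ r
        j = np.argmax(np.abs(corr))          # coordinate achieving the maximum correlation
        s = np.sign(corr[j])
        beta[j] += eps * s                   # tiny step on a single coefficient
        r -= eps * s * X[:, j]               # the corresponding subgradient step on the residual
    return beta

# Toy usage (illustrative only): sparse ground truth, normalized feature columns.
rng = np.random.default_rng(5)
X = rng.normal(size=(100, 20))
X /= np.linalg.norm(X, axis=0)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=100)
beta = forward_stagewise(X, y)
print("indices of the three largest |beta_j|:", np.argsort(-np.abs(beta))[:3])
```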

AdaBoost and Forward Stagewise Regression are First-Order Convex Optimization Methods

no code implementations • 4 Jul 2013 • Robert M. Freund, Paul Grigas, Rahul Mazumder

Boosting methods are highly popular and effective supervised learning methods which combine weak learners into a single accurate model with good statistical performance.

regression
