Search Results for author: Hiva Ghanbari

Found 4 papers, 0 papers with code

Novel and Efficient Approximations for Zero-One Loss of Linear Classifiers

no code implementations • 28 Feb 2019 • Hiva Ghanbari, Minhan Li, Katya Scheinberg

In this work, we show that, in the case of linear predictors, the expected error and the expected ranking loss can be effectively approximated by smooth functions whose closed-form expressions, and those of their first- (and second-)order derivatives, depend on the first and second moments of the data distribution, which can be precomputed.
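
To illustrate the moment-based closed form, here is a minimal sketch for the special case of class-conditional Gaussian data, where the expected zero-one loss of sign(w·x + b) reduces to normal-CDF terms. The function name and the Gaussian assumption are illustrative; the paper derives more general expressions.

```python
import numpy as np
from scipy.stats import norm

def expected_error_gaussian(w, b, mu_pos, cov_pos, mu_neg, cov_neg, p_pos=0.5):
    """Smooth closed-form expected zero-one loss of the linear classifier
    sign(w @ x + b), assuming class-conditional Gaussian data.

    Only the precomputed first and second moments (mu, cov) of each class
    are needed, never the raw data itself.
    """
    s_pos = np.sqrt(w @ cov_pos @ w)   # std of the score on the + class
    s_neg = np.sqrt(w @ cov_neg @ w)   # std of the score on the - class
    err_pos = norm.cdf(-(w @ mu_pos + b) / s_pos)  # P(score < 0 | y = +1)
    err_neg = norm.cdf((w @ mu_neg + b) / s_neg)   # P(score > 0 | y = -1)
    return p_pos * err_pos + (1.0 - p_pos) * err_neg
```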

Directly and Efficiently Optimizing Prediction Error and AUC of Linear Classifiers

no code implementations • 7 Feb 2018 • Hiva Ghanbari, Katya Scheinberg

We show that, even when the data is not normally distributed, the computed derivatives are sufficiently useful to yield an efficient optimization method and high-quality solutions.
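
A hedged sketch of the resulting workflow: precompute the class moments once, then minimize the smooth surrogate with an off-the-shelf optimizer. The synthetic data, variable names, and the use of BFGS with numerical gradients are stand-ins; the paper works with analytic first- and second-order derivatives.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, size=(200, 5))    # synthetic positive class
X_neg = rng.normal(loc=-1.0, size=(200, 5))   # synthetic negative class

# Precompute the moments once; the objective never touches the raw data again.
mu_p, cov_p = X_pos.mean(0), np.cov(X_pos.T)
mu_n, cov_n = X_neg.mean(0), np.cov(X_neg.T)

def smooth_error(theta):
    """Smooth surrogate of the expected prediction error of sign(w @ x + b)."""
    w, b = theta[:-1], theta[-1]
    s_p = np.sqrt(w @ cov_p @ w)
    s_n = np.sqrt(w @ cov_n @ w)
    return 0.5 * norm.cdf(-(w @ mu_p + b) / s_p) + 0.5 * norm.cdf((w @ mu_n + b) / s_n)

res = minimize(smooth_error, x0=np.ones(6), method="BFGS")  # numeric gradients here
print("surrogate error:", res.fun)
```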

Black-Box Optimization in Machine Learning with Trust Region Based Derivative Free Algorithm

no code implementations • 20 Mar 2017 • Hiva Ghanbari, Katya Scheinberg

In this work, we utilize a Trust-Region-based Derivative-Free Optimization (DFO-TR) method to directly maximize the Area Under the Receiver Operating Characteristic Curve (AUC), which is a nonsmooth, noisy function.

Tasks: Bayesian Optimization · BIG-bench Machine Learning · +1
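
A minimal sketch of direct AUC maximization with a derivative-free optimizer. SciPy ships no trust-region DFO solver, so Nelder-Mead stands in for DFO-TR here; the synthetic data and starting point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8])
     + 0.5 * rng.normal(size=400) > 0).astype(int)

def neg_auc(w):
    # AUC is piecewise constant in w: nonsmooth and, under subsampling, noisy,
    # which is why a derivative-free method is applied.
    return -roc_auc_score(y, X @ w)

# Nelder-Mead as a stand-in for the paper's trust-region DFO (DFO-TR) solver.
res = minimize(neg_auc, x0=np.ones(5), method="Nelder-Mead")
print("AUC:", -res.fun)
```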

Proximal Quasi-Newton Methods for Regularized Convex Optimization with Linear and Accelerated Sublinear Convergence Rates

no code implementations • 11 Jul 2016 • Hiva Ghanbari, Katya Scheinberg

In [19], a general, inexact, and efficient proximal quasi-Newton algorithm for composite optimization problems was proposed, and a sublinear global convergence rate was established.
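
For orientation, the composite setting is min f(x) + g(x) with f smooth and g (e.g. an l1 penalty) handled through its proximal operator. Below is a minimal proximal gradient sketch for the lasso special case; the paper's algorithm replaces the scalar step 1/L with an inexact quasi-Newton metric, which this sketch does not implement.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t * ||.||_1: elementwise shrinkage."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_gradient_lasso(A, b, lam, steps=500):
    """Proximal gradient for min 0.5 * ||Ax - b||^2 + lam * ||x||_1.

    Plain first-order special case: a gradient step on the smooth part
    followed by the prox of the l1 term, with constant step 1/L.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```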
