Search Results for author: Katya Scheinberg

Found 26 papers, 5 papers with code

Finding Optimal Policy for Queueing Models: New Parameterization

no code implementations21 Jun 2022 Trang H. Tran, Lam M. Nguyen, Katya Scheinberg

In this work, we investigate the optimization aspects of the queueing model as an RL environment and provide insight into learning the optimal policy efficiently.

Navigate · reinforcement-learning +1

Nesterov Accelerated Shuffling Gradient Method for Convex Optimization

1 code implementation7 Feb 2022 Trang H. Tran, Katya Scheinberg, Lam M. Nguyen

This rate is better than that of any other shuffling gradient method in the convex regime.
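A minimal sketch of how a shuffling (random-reshuffling) gradient method with a Nesterov-style extrapolation can be organized; the placement of the momentum step, the per-epoch reshuffling, and all hyperparameters below are illustrative assumptions, not the exact update analyzed in the paper:

```python
import numpy as np

def nesterov_shuffling_sgd(grad_i, n, w0, lr=0.01, momentum=0.9, epochs=50, seed=0):
    """Illustrative random-reshuffling loop with a Nesterov-style extrapolation.

    grad_i(w, i) returns the gradient of the i-th component function at w.
    NOTE: a generic sketch; the acceleration placement is an assumption.
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    w_prev = w0.copy()
    for _ in range(epochs):
        # Extrapolate from the last two epoch iterates (momentum step).
        y = w + momentum * (w - w_prev)
        w_prev = w.copy()
        perm = rng.permutation(n)          # reshuffle the components once per epoch
        for i in perm:                     # one incremental pass over the data
            y = y - lr * grad_i(y, i)
        w = y
    return w
```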

Feature Engineering and Forecasting via Derivative-free Optimization and Ensemble of Sequence-to-sequence Networks with Applications in Renewable Energy

1 code implementation12 Sep 2019 Mohammad Pirhooshyaran, Katya Scheinberg, Lawrence V. Snyder

This study introduces a framework for the forecasting, reconstruction, and feature engineering of multivariate processes, along with applications in renewable energy.

Feature Engineering · feature selection

Linear interpolation gives better gradients than Gaussian smoothing in derivative-free optimization

no code implementations29 May 2019 Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, Katya Scheinberg

We then demonstrate, via a rigorous analysis of the variance and numerical comparisons on reinforcement learning tasks, that the Gaussian sampling method used in [Salimans et al. 2016] is significantly inferior to the orthogonal sampling used in [Choromanski et al. 2018] as well as to more general interpolation methods.

reinforcement-learning · Reinforcement Learning (RL)
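The two gradient approximations being compared are easy to sketch side by side: `gaussian_smoothing_grad` is the Monte Carlo estimator behind Gaussian smoothing, and `forward_difference_grad` is the simplest linear-interpolation estimator (forward differences on the coordinate basis). Sample counts and the smoothing radius are placeholder values; the same estimators are also the subject of the companion paper listed next.

```python
import numpy as np

def gaussian_smoothing_grad(f, x, sigma=1e-2, num_samples=100, seed=0):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed f."""
    rng = np.random.default_rng(seed)
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (f(x + sigma * u) - fx) / sigma * u
    return g / num_samples

def forward_difference_grad(f, x, h=1e-2):
    """Linear-interpolation (forward finite-difference) gradient on the coordinate basis."""
    d = x.size
    fx = f(x)
    g = np.empty(d)
    for j in range(d):
        e = np.zeros(d)
        e[j] = 1.0
        g[j] = (f(x + h * e) - fx) / h
    return g

# Example: both estimators on a simple quadratic with gradient x.
f = lambda z: 0.5 * np.dot(z, z)
x = np.array([1.0, -2.0, 3.0])
print(gaussian_smoothing_grad(f, x), forward_difference_grad(f, x))
```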

A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization

no code implementations3 May 2019 Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, Katya Scheinberg

To this end, we use the results in [Berahas et al., 2019] and show how each method can satisfy the sufficient conditions, possibly only with some sufficiently large probability at each iteration, as happens to be the case with Gaussian smoothing and smoothing on a sphere.

Optimization and Control

Novel and Efficient Approximations for Zero-One Loss of Linear Classifiers

no code implementations28 Feb 2019 Hiva Ghanbari, Minhan Li, Katya Scheinberg

In this work, we show that, in the case of linear predictors, the expected error and the expected ranking loss can be effectively approximated by smooth functions whose closed-form expressions, and those of their first (and second) order derivatives, depend on the first and second moments of the data distribution, which can be precomputed.
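As an illustration of the moments-based idea, a standard smooth approximation of the expected zero-one loss of a linear classifier assumes the per-class scores are approximately Gaussian; whether this matches the paper's exact closed-form expressions is not confirmed here, so treat it only as a sketch of the general approach.

```python
import numpy as np
from scipy.stats import norm

def expected_error_linear(w, b, mu_pos, cov_pos, mu_neg, cov_neg, p_pos=0.5):
    """Smooth approximation of the expected zero-one loss of x -> sign(w @ x + b).

    Assumes the scores w @ x + b are (approximately) Gaussian within each class,
    so only the class means and covariances (first and second moments) are
    needed; these can be precomputed once from the data.
    """
    m_pos = w @ mu_pos + b
    s_pos = np.sqrt(w @ cov_pos @ w)
    m_neg = w @ mu_neg + b
    s_neg = np.sqrt(w @ cov_neg @ w)
    err_pos = norm.cdf(-m_pos / s_pos)   # P(score < 0 | y = +1)
    err_neg = norm.cdf(+m_neg / s_neg)   # P(score > 0 | y = -1)
    return p_pos * err_pos + (1 - p_pos) * err_neg
```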

Inexact SARAH Algorithm for Stochastic Optimization

no code implementations25 Nov 2018 Lam M. Nguyen, Katya Scheinberg, Martin Takáč

We develop and analyze a variant of the SARAH algorithm, which does not require computation of the exact gradient.

Stochastic Optimization
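The core of SARAH is a recursive gradient estimate; a sketch of that recursion with an inexact (subsampled) outer gradient, as the abstract describes, is below. The `grads` oracle, the batch sizes, and the step size are placeholder assumptions; exact SARAH (the ICML 2017 paper further down this list) anchors each outer iteration at the full gradient instead.

```python
import numpy as np

def inexact_sarah(grads, n, w0, lr=0.05, inner_steps=None, outer_batch=None,
                  epochs=10, seed=0):
    """Sketch of the SARAH recursion with an inexact (mini-batch) outer gradient.

    grads(w, idx) returns the average gradient of the component functions
    indexed by idx, evaluated at w.
    """
    rng = np.random.default_rng(seed)
    inner_steps = inner_steps or n
    outer_batch = outer_batch or max(1, n // 2)   # placeholder subsample size
    w_prev = w0.copy()
    for _ in range(epochs):
        anchor = rng.choice(n, size=outer_batch, replace=False)
        v = grads(w_prev, anchor)                 # inexact estimate of the full gradient
        w = w_prev - lr * v
        for _ in range(inner_steps):
            i = rng.integers(n)
            # SARAH recursion: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}
            v = grads(w, [i]) - grads(w_prev, [i]) + v
            w_prev, w = w, w - lr * v
        w_prev = w                                # last inner iterate starts the next loop
    return w
```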

New Convergence Aspects of Stochastic Gradient Algorithms

no code implementations10 Nov 2018 Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč, Marten van Dijk

We show the convergence of SGD for a strongly convex objective function without using the bounded gradient assumption, when $\{\eta_t\}$ is a diminishing sequence and $\sum_{t=0}^\infty \eta_t = \infty$.
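A step-size schedule satisfying both conditions is, for example, $\eta_t = \eta_0 / (t+1)$: it is decreasing and its series diverges (harmonic series). A minimal SGD loop using that schedule, with the gradient oracle left abstract:

```python
import numpy as np

def sgd_diminishing(grad_i, n, w0, eta0=0.5, iters=10_000, seed=0):
    """Plain SGD with the diminishing step size eta_t = eta0 / (t + 1).

    The schedule is decreasing and sums to infinity, matching the conditions
    in the convergence result quoted above; no bound on the stochastic
    gradients is assumed.
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for t in range(iters):
        i = rng.integers(n)                  # sample one component function
        w = w - (eta0 / (t + 1)) * grad_i(w, i)
    return w
```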

An Empirical Analysis of Constrained Support Vector Quantile Regression for Nonparametric Probabilistic Forecasting of Wind Power

no code implementations29 Mar 2018 Kostas Hatalis, Shalinee Kishore, Katya Scheinberg, Alberto Lamadrid

Uncertainty analysis in the form of probabilistic forecasting can provide significant improvements in decision-making processes in the smart power grid for better integrating renewable energies such as wind.

Decision Making · Prediction Intervals +1

SGD and Hogwild! Convergence Without the Bounded Gradients Assumption

no code implementations ICML 2018 Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takáč

In (Bottou et al., 2016), a new analysis of the convergence of SGD is performed under the assumption that the stochastic gradients are bounded with respect to the true gradient norm.

BIG-bench Machine Learning

Directly and Efficiently Optimizing Prediction Error and AUC of Linear Classifiers

no code implementations7 Feb 2018 Hiva Ghanbari, Katya Scheinberg

We show that even when the data is not normally distributed, the computed derivatives are sufficiently accurate to yield an efficient optimization method and high-quality solutions.

When Does Stochastic Gradient Algorithm Work Well?

no code implementations18 Jan 2018 Lam M. Nguyen, Nam H. Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Katya Scheinberg

In this paper, we consider a general stochastic optimization problem that is often at the core of supervised learning tasks such as deep learning and linear classification.

General Classification · regression +1

A Stochastic Trust Region Algorithm Based on Careful Step Normalization

no code implementations29 Dec 2017 Frank E. Curtis, Katya Scheinberg, Rui Shi

An algorithm is proposed for solving stochastic and finite sum minimization problems.

Smooth Pinball Neural Network for Probabilistic Forecasting of Wind Power

1 code implementation4 Oct 2017 Kostas Hatalis, Alberto J. Lamadrid, Katya Scheinberg, Shalinee Kishore

Multiple quantiles are estimated to form 10% to 90% prediction intervals, which are evaluated using a quantile score and reliability measures.

Decision Making · Prediction Intervals +1
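The pinball (quantile) loss is not differentiable at zero; one common smooth surrogate replaces the kink with a softplus, which is the idea behind a smooth pinball network. A sketch follows (the paper's exact formulation and smoothing parameter may differ):

```python
import numpy as np

def smooth_pinball_loss(y, q, tau, alpha=0.01):
    """Smooth approximation of the pinball loss for quantile level tau.

    The exact pinball loss is max(tau*u, (tau-1)*u) with u = y - q.
    Replacing max(0, -u) by the softplus alpha*log(1 + exp(-u/alpha)) gives a
    differentiable surrogate that approaches the exact loss as alpha -> 0.
    """
    u = y - q
    return tau * u + alpha * np.logaddexp(0.0, -u / alpha)

# Example: losses of the 10% and 90% quantile estimates for one observation.
print(smooth_pinball_loss(y=1.0, q=0.7, tau=0.1),
      smooth_pinball_loss(y=1.0, q=1.4, tau=0.9))
```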

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning

1 code implementation30 Jun 2017 Frank E. Curtis, Katya Scheinberg

We then discuss some of the distinctive features of these optimization problems, focusing on the examples of logistic regression and the training of deep neural networks.

BIG-bench Machine Learning · regression +1

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization

no code implementations20 May 2017 Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč

In this paper, we study and analyze the mini-batch version of the StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization in the case of nonconvex losses.

Black-Box Optimization in Machine Learning with Trust Region Based Derivative Free Algorithm

no code implementations20 Mar 2017 Hiva Ghanbari, Katya Scheinberg

In this work, we utilize a Trust Region based Derivative Free Optimization (DFO-TR) method to directly maximize the Area Under the Receiver Operating Characteristic curve (AUC), which is a nonsmooth, noisy function.

Bayesian Optimization · BIG-bench Machine Learning +1
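Because AUC is a piecewise-constant (nonsmooth, noisy) function of the classifier weights, it has to be treated as a black box. The sketch below maximizes the AUC of a linear scorer with a generic derivative-free optimizer from SciPy as a stand-in; the paper itself uses a model-based trust-region method (DFO-TR), not Nelder-Mead.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def fit_linear_auc(X, y, seed=0):
    """Maximize the AUC of a linear scorer x -> x @ w as a black-box objective."""
    rng = np.random.default_rng(seed)

    def neg_auc(w):
        # AUC is piecewise constant in w, so no useful gradient exists.
        return -roc_auc_score(y, X @ w)

    res = minimize(neg_auc, rng.standard_normal(X.shape[1]),
                   method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-4, "fatol": 1e-4})
    return res.x, -res.fun   # weights and the achieved AUC
```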

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient

no code implementations ICML 2017 Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč

In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to finite-sum minimization problems.

BIG-bench Machine Learning

Optimal Generalized Decision Trees via Integer Programming

no code implementations10 Dec 2016 Oktay Gunluk, Jayant Kalagnanam, Minhan Li, Matt Menickelly, Katya Scheinberg

Decision trees have been a very popular class of predictive models for decades due to their interpretability and good performance on categorical features.

Proximal Quasi-Newton Methods for Regularized Convex Optimization with Linear and Accelerated Sublinear Convergence Rates

no code implementations11 Jul 2016 Hiva Ghanbari, Katya Scheinberg

In [19], a general, inexact, and efficient proximal quasi-Newton algorithm for composite optimization problems was proposed and a sublinear global convergence rate was established.
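This entry and the two l1-oriented entries below build on the basic composite proximal step. A minimal sketch of that building block for f(w) plus an l1 penalty is given here; the scalar step size and the plain proximal gradient update are simplifying assumptions, whereas the proximal quasi-Newton methods discussed above replace the scalar step by a variable metric and solve the resulting scaled subproblem inexactly.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient_l1(grad_f, w0, lam, lr=0.1, iters=500):
    """Basic proximal gradient iteration for F(w) = f(w) + lam * ||w||_1."""
    w = w0.copy()
    for _ in range(iters):
        # Gradient step on the smooth part, then prox step on the l1 part.
        w = soft_threshold(w - lr * grad_f(w), lr * lam)
    return w
```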

Practical Inexact Proximal Quasi-Newton Method with Global Complexity Analysis

1 code implementation26 Nov 2013 Katya Scheinberg, Xiaocheng Tang

Recently several methods were proposed for sparse optimization which make careful use of second-order information [10, 28, 16, 3] to improve local convergence rates.

Efficiently Using Second Order Information in Large l1 Regularization Problems

no code implementations27 Mar 2013 Xiaocheng Tang, Katya Scheinberg

We propose a novel general algorithm, LHAC, that efficiently uses second-order information to solve a class of large-scale l1-regularized problems.

regression

Fast Alternating Linearization Methods for Minimizing the Sum of Two Convex Functions

no code implementations23 Dec 2009 Donald Goldfarb, Shiqian Ma, Katya Scheinberg

We present in this paper first-order alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions.
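The classical alternating direction augmented Lagrangian scheme the abstract builds on can be sketched in a few lines for min f(x) + g(x), split as f(x) + g(y) with the constraint x = y. The paper's alternating linearization algorithms and their accelerated variants refine this baseline, so the code below is only the generic two-block scheme, with the proximal operators supplied by the user.

```python
import numpy as np

def two_block_admm(prox_f, prox_g, x0, rho=1.0, iters=200):
    """Scaled-form ADMM for min_{x,y} f(x) + g(y) subject to x = y.

    prox_f(v, t) and prox_g(v, t) must return argmin_z t*f(z) + 0.5*||z - v||^2.
    """
    x = x0.copy()
    y = x0.copy()
    u = np.zeros_like(x0)               # scaled dual variable
    for _ in range(iters):
        x = prox_f(y - u, 1.0 / rho)    # minimize over x with y, u fixed
        y = prox_g(x + u, 1.0 / rho)    # minimize over y with x, u fixed
        u = u + x - y                   # dual (multiplier) update
    return y
```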
