no code implementations • 8 Mar 2024 • Xun Tang, Holakou Rahmanian, Michael Shavlovsky, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying
We derive the corresponding entropy-regularized formulation and introduce a Sinkhorn-type algorithm for such constrained OT problems, supported by theoretical guarantees.
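For context, here is a minimal NumPy sketch of the standard (unconstrained) entropy-regularized Sinkhorn iteration; the paper's additional constraints and its guarantees are not reproduced, and `C`, `r`, `c`, and `eps` are illustrative placeholders.

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.05, n_iter=500):
    # Standard entropy-regularized Sinkhorn scaling on an (m x n) cost matrix C
    # with marginals r and c; the constrained variant from the paper is not modeled.
    K = np.exp(-C / eps)                 # Gibbs kernel
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(n_iter):
        u = r / (K @ v)                  # rescale rows toward marginal r
        v = c / (K.T @ u)                # rescale columns toward marginal c
    return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)
```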
no code implementations • 20 Jan 2024 • Xun Tang, Michael Shavlovsky, Holakou Rahmanian, Elisa Tardini, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying
To achieve possibly super-exponential convergence, we present Sinkhorn-Newton-Sparse (SNS), an extension of the Sinkhorn algorithm that introduces early stopping for the matrix scaling steps and a second stage featuring a Newton-type subroutine.
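A rough sketch of this two-stage structure, assuming entropic OT with cost `C` and marginals `r`, `c`; the exact sparsification rule, line search, and convergence safeguards of the actual SNS algorithm are not reproduced here.

```python
import numpy as np

def sns_sketch(C, r, c, eps=0.05, warm_iters=20, newton_iters=10, tau=1e-8):
    # Illustrative two-stage scheme in the spirit of Sinkhorn-Newton-Sparse:
    # stage 1 runs early-stopped Sinkhorn scaling, stage 2 takes damped
    # Newton-type steps on the entropic dual with small Hessian entries dropped.
    m, n = C.shape
    x, y = np.zeros(m), np.zeros(n)

    # Stage 1: early-stopped log-domain Sinkhorn (matrix scaling) updates.
    for _ in range(warm_iters):
        x = eps * (np.log(r) - np.log(np.exp((y[None, :] - C) / eps).sum(axis=1)))
        y = eps * (np.log(c) - np.log(np.exp((x[:, None] - C) / eps).sum(axis=0)))

    # Stage 2: Newton-type steps on the concave dual of entropic OT.
    for _ in range(newton_iters):
        P = np.exp((x[:, None] + y[None, :] - C) / eps)          # current plan
        grad = np.concatenate([r - P.sum(axis=1), c - P.sum(axis=0)])
        P_sp = np.where(P > tau, P, 0.0)                          # sparsified Hessian blocks
        H = np.block([[np.diag(P_sp.sum(axis=1)), P_sp],
                      [P_sp.T, np.diag(P_sp.sum(axis=0))]]) / eps
        step = np.linalg.solve(H + 1e-10 * np.eye(m + n), grad)   # ridge handles singularity
        x += 0.5 * step[:m]                                       # damped update (the paper
        y += 0.5 * step[m:]                                       # uses a proper line search)
    return np.exp((x[:, None] + y[None, :] - C) / eps)
```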
no code implementations • 22 Nov 2023 • Yinuo Ren, Tesi Xiao, Tanmay Gangwani, Anshuka Rangi, Holakou Rahmanian, Lexing Ying, Subhajit Sanyal
Multi-objective optimization (MOO) aims to optimize multiple, possibly conflicting, objectives and has widespread applications.
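As a generic illustration of the MOO setting (not the algorithm proposed in the paper), two conflicting objectives and a weighted-sum scalarization that traces out their trade-off:

```python
import numpy as np

# Two conflicting objectives of a scalar parameter w: f1 prefers w = 0,
# f2 prefers w = 1, so no single w minimizes both at once.
f1 = lambda w: w ** 2
f2 = lambda w: (w - 1.0) ** 2

# Weighted-sum scalarization: sweeping the weight a traces Pareto-optimal
# trade-offs (here the minimizer of a*f1 + (1-a)*f2 is w = 1 - a).
ws = np.linspace(0.0, 1.0, 1001)
for a in np.linspace(0.0, 1.0, 5):
    best = ws[np.argmin(a * f1(ws) + (1 - a) * f2(ws))]
    print(f"weight {a:.2f} -> w* ~ {best:.2f}")
```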
no code implementations • 19 Sep 2022 • Shuo Yang, Sujay Sanghavi, Holakou Rahmanian, Jan Bakus, S. V. N. Vishwanathan
Such features naturally arise in merchandise recommendation systems; for instance, "user clicked this item" as a feature is predictive of "user purchased this item" in the offline data, but is clearly not available during online serving.
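A toy illustration of this train/serve mismatch on synthetic data (all feature names and numbers below are made up, and this is not the paper's method): a model trained with the offline-only click feature degrades once that feature must be zeroed out at serving time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
clicked = rng.binomial(1, 0.3, n)                  # "user clicked this item" (offline only)
purchased = clicked & rng.binomial(1, 0.9, n)      # clicks drive purchases
other = rng.normal(size=(n, 3))                    # ordinary, always-available features

# Offline training uses the click feature and leans on it heavily.
X_offline = np.column_stack([clicked, other])
model = LogisticRegression().fit(X_offline, purchased)

# At serving time the click has not happened yet; naively imputing it with
# zeros breaks the model that depended on it during training.
X_online = np.column_stack([np.zeros(n), other])
print("offline accuracy:", model.score(X_offline, purchased))
print("online  accuracy:", model.score(X_online, purchased))
```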
no code implementations • 18 Apr 2018 • Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Holakou Rahmanian, Manfred K. Warmuth
We study the problem of online path learning with non-additive gains, which is a central problem appearing in several applications, including ensemble structured prediction.
no code implementations • NeurIPS 2017 • Holakou Rahmanian, Manfred K. Warmuth
We consider the problem of repeatedly solving a variant of the same dynamic programming problem in successive trials.
no code implementations • 15 Mar 2017 • Jie Zhu, Ying Shan, JC Mao, Dong Yu, Holakou Rahmanian, Yi Zhang
Built on top of a representative DNN model (Deep Crossing) and two forest/tree-based models (XGBoost and LightGBM), the two-step Deep Embedding Forest algorithm is shown to achieve on-par or slightly better performance than its DNN counterpart, with only a fraction of the serving time on conventional hardware.
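A minimal sketch of the two-step idea, assuming a toy dataset and a small embedding + MLP network as a stand-in for Deep Crossing (the real model uses residual units and a different training setup); XGBoost is used for the forest stage, so online scoring needs only an embedding lookup plus tree traversal.

```python
import numpy as np
import torch, torch.nn as nn
import xgboost as xgb

# Toy data: one high-cardinality categorical feature plus a few dense features.
n, n_cat, d_dense = 5000, 1000, 8
rng = np.random.default_rng(0)
cat = rng.integers(0, n_cat, n)
dense = rng.normal(size=(n, d_dense)).astype(np.float32)
y = ((cat % 7 == 0) | (dense[:, 0] > 1.0)).astype(np.float32)

# Step 1: train a small embedding + MLP network end to end (Deep Crossing stand-in).
emb = nn.Embedding(n_cat, 16)
mlp = nn.Sequential(nn.Linear(16 + d_dense, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(emb.parameters()) + list(mlp.parameters()), lr=1e-2)
cat_t, dense_t, y_t = torch.tensor(cat), torch.tensor(dense), torch.tensor(y)
for _ in range(200):
    opt.zero_grad()
    h = torch.cat([emb(cat_t), dense_t], dim=1)
    loss = nn.functional.binary_cross_entropy_with_logits(mlp(h).squeeze(1), y_t)
    loss.backward()
    opt.step()

# Step 2: freeze the learned embeddings and fit a gradient-boosted forest on them,
# replacing the deep layers for serving.
with torch.no_grad():
    feats = torch.cat([emb(cat_t), dense_t], dim=1).numpy()
forest = xgb.XGBClassifier(n_estimators=200, max_depth=6).fit(feats, y.astype(int))
print("forest accuracy on embedded features:", forest.score(feats, y.astype(int)))
```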
no code implementations • 17 Sep 2016 • Holakou Rahmanian, David P. Helmbold, S. V. N. Vishwanathan
We present applications of our framework to online learning of Huffman trees and permutations.