no code implementations • 5 Feb 2024 • Ruihan Wu, Siddhartha Datta, Yi Su, Dheeraj Baby, Yu-Xiang Wang, Kilian Q. Weinberger
This paper addresses the prevalent issue of label shift in an online setting with missing labels, where data distributions change over time and obtaining timely labels is challenging.
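A common building block for this kind of adaptation is black-box shift estimation (BBSE), which recovers the target label marginal from a source confusion matrix. The sketch below is a minimal offline illustration of that step, not the paper's method; the classifier, class counts, and shifted marginal are all hypothetical.

```python
import numpy as np

# Illustrative toy setup (all numbers hypothetical): 3 classes, a fixed
# black-box classifier evaluated on labeled source and unlabeled target data.
rng = np.random.default_rng(0)
k, n = 3, 5000

def noisy_preds(labels):
    # The classifier is correct with probability 0.8, else guesses uniformly.
    guess = rng.integers(0, k, size=labels.size)
    return np.where(rng.random(labels.size) < 0.8, labels, guess)

# Confusion matrix on source data: C[i, j] = P(prediction = i | true label = j).
src_y = rng.integers(0, k, size=n)
C = np.zeros((k, k))
np.add.at(C, (noisy_preds(src_y), src_y), 1.0)
C /= C.sum(axis=0, keepdims=True)

# Predicted-label marginal on the (unlabeled) target stream.
tgt_y = rng.choice(k, size=n, p=[0.7, 0.2, 0.1])  # shifted label marginal
mu = np.bincount(noisy_preds(tgt_y), minlength=k) / n

# Under label shift mu = C q; solve for the target label marginal q.
q = np.linalg.solve(C, mu)
q = np.clip(q, 0.0, None)
q /= q.sum()
print("estimated target label marginal:", np.round(q, 3))
```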
no code implementations • 25 Jun 2023 • Dheeraj Baby, Aniket Das, Dheeraj Nagaraj, Praneeth Netrapalli
Our work shows that we can estimate $\mathbf{w}^{*}$ in squared norm up to an error of $\tilde{O}\left(\|\mathbf{f}^{*}\|^2 \cdot \left(\frac{1}{n} + \left(\frac{d}{n}\right)^2\right)\right)$ and prove a matching lower bound (up to log factors).
no code implementations • 18 Jun 2022 • Dheeraj Baby, Yu-Xiang Wang
We consider the problem of nonstochastic control with a sequence of quadratic losses, i.e., LQR control.
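For context, the sketch below simulates the basic LQR loop: linear dynamics $x_{t+1} = Ax_t + Bu_t + w_t$, quadratic per-step losses, and a fixed linear state-feedback controller. The system matrices and gain are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-state, 1-input system x_{t+1} = A x_t + B u_t + w_t.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                 # state cost weight
R = np.array([[0.1]])         # control cost weight
K = np.array([[1.0, 1.5]])    # hand-picked stabilizing gain (hypothetical)

x = np.array([1.0, 0.0])
total_loss = 0.0
for t in range(200):
    u = -K @ x                             # linear state feedback
    total_loss += x @ Q @ x + u @ R @ u    # quadratic loss x'Qx + u'Ru
    w = 0.01 * rng.standard_normal(2)      # small stochastic disturbance
    x = A @ x + B @ u + w

print("cumulative quadratic loss:", round(float(total_loss), 3))
```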
no code implementations • 4 May 2022 • Dheeraj Baby, Yu-Xiang Wang
We consider the problem of universal dynamic regret minimization under exp-concave and smooth losses.
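Exp-concave losses are the natural home of Online Newton Step (Hazan et al., 2007), a standard static-regret baseline in this literature. The sketch below runs ONS on toy squared losses $(\langle a_t, x\rangle - b_t)^2$; it replaces the generalized ($A$-norm) projection with a crude renormalization to the unit ball, so treat it as an illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
T, d, gamma, eps = 500, 2, 0.5, 1.0

x = np.zeros(d)
A = eps * np.eye(d)                 # running matrix A_t = eps*I + sum g g^T
total_loss = 0.0
for t in range(T):
    a, b = rng.standard_normal(d), rng.standard_normal()
    total_loss += (a @ x - b) ** 2
    g = 2.0 * (a @ x - b) * a       # gradient of the loss (<a, x> - b)^2
    A += np.outer(g, g)
    x = x - np.linalg.solve(A, g) / gamma   # Newton-style step A^{-1} g
    x /= max(1.0, np.linalg.norm(x))        # crude projection to unit ball

print("cumulative loss of the ONS sketch:", round(float(total_loss), 3))
```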
no code implementations • 21 Jan 2022 • Dheeraj Baby, Yu-Xiang Wang
We study the framework of universal dynamic regret minimization with strongly convex losses.
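As a static-regret baseline for this setting, projected online gradient descent with step size $1/(\lambda t)$ attains $O(\log T)$ regret for $\lambda$-strongly-convex losses. A minimal sketch on toy quadratic losses (constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T, lam = 1000, 1.0                    # horizon, strong-convexity parameter
theta = rng.uniform(-1.0, 1.0, T)     # toy losses f_t(x) = lam/2 * (x - theta_t)^2

x, alg_loss = 0.0, 0.0
for t in range(1, T + 1):
    alg_loss += 0.5 * lam * (x - theta[t - 1]) ** 2
    grad = lam * (x - theta[t - 1])
    x = np.clip(x - grad / (lam * t), -1.0, 1.0)   # step 1/(lam*t) + projection

best_fixed = theta.mean()             # minimizer of the cumulative loss
regret = alg_loss - np.sum(0.5 * lam * (best_fixed - theta) ** 2)
print("static regret:", round(float(regret), 3), "(grows like log T)")
```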
no code implementations • 22 Nov 2021 • Dheeraj Baby, Hilaf Hasson, Yuyang Wang
When the loss functions are strongly convex or exp-concave, we demonstrate that Strongly Adaptive (SA) algorithms can be viewed as a principled way of controlling dynamic regret in terms of the path variation $V_T$ of the comparator sequence.
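Strongly Adaptive methods typically run base learners on a geometric covering of the horizon and aggregate them, so that every contiguous interval is covered by $O(\log T)$ base learners. A tiny sketch of the standard covering intervals $[i \cdot 2^k, (i+1) \cdot 2^k - 1]$ (the construction of Daniely et al., 2015), shown purely as an illustration:

```python
def geometric_cover(T):
    """Geometric covering intervals [i*2^k, (i+1)*2^k - 1] inside [1, T]."""
    intervals, k = [], 0
    while 2 ** k <= T:
        i = 1
        while (i + 1) * 2 ** k - 1 <= T:
            intervals.append((i * 2 ** k, (i + 1) * 2 ** k - 1))
            i += 1
        k += 1
    return intervals

# Every time step lies in O(log T) intervals, which is what lets an SA
# meta-algorithm compete with the best expert on every subinterval.
cover = geometric_cover(16)
print("intervals covering t = 7:", [I for I in cover if I[0] <= 7 <= I[1]])
```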
no code implementations • 23 Apr 2021 • Dheeraj Baby, Yu-Xiang Wang
We consider the problem of Zinkevich (2003)-style dynamic regret minimization in online learning with exp-concave losses.
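Zinkevich (2003) showed that plain projected online gradient descent already admits a dynamic regret bound scaling with the comparator's path length. The snippet below measures both quantities on toy quadratic losses with slowly drifting minimizers; all constants are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1000
theta = np.cumsum(0.01 * rng.standard_normal(T))   # slowly drifting minimizers

x, eta, alg_loss = 0.0, 1.0 / np.sqrt(T), 0.0
for t in range(T):
    alg_loss += (x - theta[t]) ** 2                # loss f_t(x) = (x - theta_t)^2
    x = np.clip(x - eta * 2.0 * (x - theta[t]), -2.0, 2.0)   # projected OGD

# Dynamic regret is measured against the time-varying comparator u_t = theta_t
# (zero loss); its path length P_T = sum_t |u_t - u_{t-1}| drives the bound.
path_length = np.sum(np.abs(np.diff(theta)))
print("dynamic regret:", round(float(alg_loss), 3))
print("comparator path length P_T:", round(float(path_length), 3))
```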
no code implementations • 23 Jan 2021 • Dheeraj Baby, Xuandong Zhao, Yu-Xiang Wang
We consider the problem of estimating, from $n$ noisy samples, a function whose discrete Total Variation (TV) is bounded by $C_n$.
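To make the observation model concrete, the snippet below builds a piecewise-constant signal, checks its discrete TV $\sum_i |\theta_{i+1} - \theta_i|$ against the budget $C_n$, and draws the noisy samples $y_i = \theta_i + \sigma z_i$ an estimator would see; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma, C_n = 1000, 0.5, 2.0

# Piecewise-constant ground truth: jumps only at a few change points.
theta = np.zeros(n)
theta[300:] += 1.0
theta[700:] -= 1.0

# Discrete total variation TV(theta) = sum_i |theta_{i+1} - theta_i|.
tv = np.sum(np.abs(np.diff(theta)))
assert tv <= C_n, "signal must respect the TV budget C_n"

# Noisy observations y_i = theta_i + sigma * z_i with z_i ~ N(0, 1).
y = theta + sigma * rng.standard_normal(n)
print(f"TV(theta) = {tv:.1f} <= C_n = {C_n}; drew {n} noisy samples")
```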
no code implementations • NeurIPS 2020 • Dheeraj Baby, Yu-Xiang Wang
We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback, in which the dynamic regret of an online learner against a time-varying comparator sequence is studied.
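The restarting scheme of Besbes et al. [2015] is easy to sketch: split the horizon into fixed-size batches and rerun gradient descent from scratch in each one, so that stale information is discarded as the environment drifts. Below is a toy version with squared error losses and noisy gradients; the batch size and drift level are illustrative, not tuned as in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
T, batch = 2000, 100                                # horizon, restart period
theta = np.cumsum(0.005 * rng.standard_normal(T))   # slowly moving minimizers

x, dyn_regret = 0.0, 0.0
for t in range(T):
    if t % batch == 0:
        x, step = 0.0, 0                 # restart OGD at each batch boundary
    step += 1
    dyn_regret += (x - theta[t]) ** 2    # comparator u_t = theta_t has 0 loss
    noisy_grad = 2.0 * (x - theta[t]) + 0.1 * rng.standard_normal()
    x = np.clip(x - noisy_grad / (2.0 * step), -2.0, 2.0)  # strongly convex step

print("dynamic regret of restarted OGD:", round(float(dyn_regret), 3))
```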
1 code implementation • NeurIPS 2019 • Dheeraj Baby, Yu-Xiang Wang
We design an $O(n\log n)$-time algorithm that achieves a cumulative square error of $\tilde{O}(n^{1/3}C_n^{2/3}\sigma^{4/3} + C_n^2)$ with high probability. We also prove a lower bound that matches the upper bound in all parameters (up to a $\log(n)$ factor).
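To read the guarantee on a per-round basis, divide by $n$: the cumulative bound corresponds to an average squared error of order $(C_n\sigma^2/n)^{2/3}$, the familiar minimax rate for TV-bounded classes. The rearrangement is standard algebra, not quoted from the paper:

```latex
\frac{\tilde{O}\left(n^{1/3} C_n^{2/3} \sigma^{4/3} + C_n^2\right)}{n}
  = \tilde{O}\left(\left(\frac{C_n \sigma^2}{n}\right)^{2/3} + \frac{C_n^2}{n}\right)
```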