Search Results for author: Dheeraj Baby

Found 10 papers, 1 paper with code

Online Feature Updates Improve Online (Generalized) Label Shift Adaptation

no code implementations • 5 Feb 2024 • Ruihan Wu, Siddhartha Datta, Yi Su, Dheeraj Baby, Yu-Xiang Wang, Kilian Q. Weinberger

This paper addresses the prevalent issue of label shift in an online setting with missing labels, where data distributions change over time and obtaining timely labels is challenging.

Missing Labels, Self-Supervised Learning
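
For readers new to the setting, the sketch below illustrates classical black-box label shift estimation (in the spirit of Lipton et al., 2018): recover the target label marginal by inverting a confusion matrix estimated on labeled source data, then importance-weight by class. It is a minimal baseline for the general problem, not the online feature-update method of this paper, and all names in it are ours.

```python
import numpy as np

def label_shift_weights(confusion, target_pred_marginal):
    """BBSE-style weight recovery under label shift.

    confusion[i, j] = P_source(predict class i, true label j),
    estimated on held-out labeled source data.
    target_pred_marginal[i] = fraction of unlabeled target points
    predicted as class i. Solving confusion @ w = target_pred_marginal
    gives w[j] ~ q(y = j) / p(y = j), the per-class importance weight.
    """
    w = np.linalg.solve(confusion, target_pred_marginal)
    return np.clip(w, 0.0, None)  # weights must be non-negative

# Toy example: a 90%-accurate classifier on a balanced source,
# while the target predictions are skewed 75/25.
confusion = np.array([[0.45, 0.05],
                      [0.05, 0.45]])
print(label_shift_weights(confusion, np.array([0.75, 0.25])))
# -> [1.625 0.375], i.e., class 0 is over-represented in the target.
```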

Near Optimal Heteroscedastic Regression with Symbiotic Learning

no code implementations • 25 Jun 2023 • Dheeraj Baby, Aniket Das, Dheeraj Nagaraj, Praneeth Netrapalli

Our work shows that we can estimate $\mathbf{w}^{*}$ in squared norm up to an error of $\tilde{O}\left(\|\mathbf{f}^{*}\|^2 \cdot \left(\frac{1}{n} + \left(\frac{d}{n}\right)^2\right)\right)$ and prove a matching lower bound (up to log factors).

Econometrics, Regression +2
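
The abstract's rate concerns a heteroscedastic linear model. Assuming, purely for illustration, observations $y = \langle \mathbf{w}^{*}, x \rangle + |\langle \mathbf{f}^{*}, x \rangle| \cdot \xi$ with standard Gaussian $\xi$ (the paper's exact model may differ), a natural baseline alternates between estimating the variance structure and weighted least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 5
X = rng.normal(size=(n, d))
w_star, f_star = rng.normal(size=d), rng.normal(size=d)
# Hypothetical model: noise standard deviation is |<f*, x>|.
y = X @ w_star + np.abs(X @ f_star) * rng.normal(size=n)

w = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS warm start
Z = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)  # features for x x^T
for _ in range(5):
    # Squared residuals satisfy E[r^2 | x] ~ x^T (f* f*^T) x, which is
    # linear in vec(x x^T), so regressing them on Z recovers variances.
    M = np.linalg.lstsq(Z, (y - X @ w) ** 2, rcond=None)[0]
    var = np.maximum(Z @ M, 1e-3)
    # Weighted least squares with the estimated inverse variances.
    w = np.linalg.lstsq(X / np.sqrt(var)[:, None],
                        y / np.sqrt(var), rcond=None)[0]

print(np.linalg.norm(w - w_star) ** 2)  # typically improves on plain OLS
```

This alternation is only a sketch of the general "symbiotic" idea of jointly refining the mean and variance estimates; the paper's algorithm and guarantees are more delicate.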

Optimal Dynamic Regret in LQR Control

no code implementations • 18 Jun 2022 • Dheeraj Baby, Yu-Xiang Wang

We consider the problem of nonstochastic control with a sequence of quadratic losses, i.e., LQR control.
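
For context, a generic nonstochastic LQR formulation (our notation; the paper may allow time-varying or adversarially chosen components) pairs linear dynamics with quadratic losses:

$x_{t+1} = A x_t + B u_t + w_t, \qquad \ell_t(x_t, u_t) = x_t^\top Q_t x_t + u_t^\top R_t u_t,$

where the learner selects controls $u_t$ online against arbitrary disturbances $w_t$, and dynamic regret is measured against a time-varying sequence of benchmark policies.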

Second Order Path Variationals in Non-Stationary Online Learning

no code implementations • 4 May 2022 • Dheeraj Baby, Yu-Xiang Wang

We consider the problem of universal dynamic regret minimization under exp-concave and smooth losses.

Optimal Dynamic Regret in Proper Online Learning with Strongly Convex Losses and Beyond

no code implementations • 21 Jan 2022 • Dheeraj Baby, Yu-Xiang Wang

We study the framework of universal dynamic regret minimization with strongly convex losses.

Dynamic Regret for Strongly Adaptive Methods and Optimality of Online KRR

no code implementations • 22 Nov 2021 • Dheeraj Baby, Hilaf Hasson, Yuyang Wang

When the loss functions are strongly convex or exp-concave, we demonstrate that Strongly Adaptive (SA) algorithms can be viewed as a principled way of controlling dynamic regret in terms of path variation $V_T$ of the comparator sequence.

Open-Ended Question Answering, Regression
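
As background for this and the neighboring entries: given losses $f_1, \ldots, f_T$, learner's plays $x_1, \ldots, x_T$, and a comparator sequence $u_1, \ldots, u_T$, dynamic regret and path variation are standardly defined (notation ours) as

$\sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t), \qquad V_T = \sum_{t=2}^{T} \|u_t - u_{t-1}\|,$

where the choice of norm in $V_T$ varies across papers; this line of work often uses the $\ell_1$ norm, i.e., the total variation of the comparator sequence.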

Optimal Dynamic Regret in Exp-Concave Online Learning

no code implementations • 23 Apr 2021 • Dheeraj Baby, Yu-Xiang Wang

We consider the problem of Zinkevich (2003)-style dynamic regret minimization in online learning with exp-concave losses.
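
Recall that a loss $f$ is $\alpha$-exp-concave if $x \mapsto e^{-\alpha f(x)}$ is concave. For example, the squared loss $(x - y)^2$ is exp-concave on a bounded domain, which is one reason exp-concave results often specialize to online regression.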

An Optimal Reduction of TV-Denoising to Adaptive Online Learning

no code implementations • 23 Jan 2021 • Dheeraj Baby, Xuandong Zhao, Yu-Xiang Wang

We consider the problem of estimating a function whose discrete Total Variation (TV) is bounded by $C_n$ from $n$ noisy samples.

Denoising, Time Series +1
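
Concretely, the discrete total variation of $\theta \in \mathbb{R}^n$ is $\|D\theta\|_1 = \sum_{i=1}^{n-1} |\theta_{i+1} - \theta_i|$, so the estimation class is $\{\theta : \|D\theta\|_1 \le C_n\}$; the reduction, roughly speaking, treats the $n$ samples as rounds of an adaptive online learning game.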

Adaptive Online Estimation of Piecewise Polynomial Trends

no code implementations • NeurIPS 2020 • Dheeraj Baby, Yu-Xiang Wang

We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback, in which the dynamic regret of an online learner against a time-varying comparator sequence is studied.

2k, Regression +1
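
The framework is easy to instantiate numerically; the snippet below (an illustrative baseline, not the paper's policy) measures the dynamic regret of a restart-based online averager against a slowly drifting trend under squared error losses:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
theta = np.cumsum(rng.normal(scale=0.01, size=T))  # drifting comparator
y = theta + rng.normal(scale=0.5, size=T)          # noisy observations

# Online averaging with fixed restarts; choosing the restart schedule
# adaptively is exactly what this literature works hard to get right.
preds, start, s = np.empty(T), 0, 0.0
for t in range(T):
    preds[t] = s / (t - start) if t > start else 0.0
    s += y[t]
    if (t + 1) % 100 == 0:
        start, s = t + 1, 0.0

dyn_regret = np.sum((preds - y) ** 2) - np.sum((theta - y) ** 2)
print(f"dynamic regret vs. the true trend: {dyn_regret:.1f}")
```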

Online Forecasting of Total-Variation-bounded Sequences

1 code implementation • NeurIPS 2019 • Dheeraj Baby, Yu-Xiang Wang

We design an $O(n\log n)$-time algorithm that achieves a cumulative square error of $\tilde{O}(n^{1/3}C_n^{2/3}\sigma^{4/3} + C_n^2)$ with high probability. We also prove a lower bound that matches the upper bound in all parameters (up to a $\log(n)$ factor).

Stochastic Optimization
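
The quantities in the bound are simple to compute; a small sanity check of the stated rate (helper names ours):

```python
import numpy as np

def tv(x):
    """Discrete total variation: sum_i |x[i+1] - x[i]|."""
    return float(np.abs(np.diff(x)).sum())

def error_bound(n, C_n, sigma):
    # The paper's upper bound, ignoring constants and log factors:
    # n^(1/3) * C_n^(2/3) * sigma^(4/3) + C_n^2.
    return n ** (1 / 3) * C_n ** (2 / 3) * sigma ** (4 / 3) + C_n ** 2

x = np.r_[np.zeros(500), np.ones(500)]  # a single unit jump, so TV = 1
print(tv(x), error_bound(n=1000, C_n=tv(x), sigma=1.0))
```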
