Search Results for author: Shiyin Lu

Found 8 papers, 1 paper with code

Non-stationary Projection-free Online Learning with Dynamic and Adaptive Regret Guarantees

no code implementations19 May 2023 Yibo Wang, Wenhao Yang, Wei Jiang, Shiyin Lu, Bing Wang, Haihong Tang, Yuanyu Wan, Lijun Zhang

Specifically, we first provide a novel dynamic regret analysis for an existing projection-free method named $\text{BOGD}_\text{IP}$, and establish an $\mathcal{O}(T^{3/4}(1+P_T))$ dynamic regret bound, where $P_T$ denotes the path-length of the comparator sequence.
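
For context, the dynamic regret and path-length in this bound follow the standard definitions (notation ours; the paper's may differ slightly): $\text{D-Regret}(u_1, \dots, u_T) = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t)$ and $P_T = \sum_{t=2}^{T} \|u_t - u_{t-1}\|_2$, where $x_t$ is the learner's decision and $u_1, \dots, u_T$ is an arbitrary comparator sequence.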

Revisiting Smoothed Online Learning

no code implementations NeurIPS 2021 Lijun Zhang, Wei Jiang, Shiyin Lu, Tianbao Yang

Moreover, when the hitting cost is convex and satisfies $\lambda$-quadratic growth, we reduce the competitive ratio to $1 + \frac{2}{\sqrt{\lambda}}$ by minimizing the weighted sum of the hitting cost and the switching cost.
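
In this setting the per-round cost combines a hitting cost and a switching cost, and a competitive ratio of $c$ means (under standard definitions; the exact form of the switching cost in the paper may differ) that $\sum_{t=1}^{T} \big( f_t(x_t) + \|x_t - x_{t-1}\| \big) \le c \cdot \sum_{t=1}^{T} \big( f_t(x_t^*) + \|x_t^* - x_{t-1}^*\| \big)$ on every instance, where $x_1^*, \dots, x_T^*$ is the offline optimal sequence.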

Minimizing Dynamic Regret and Adaptive Regret Simultaneously

no code implementations6 Feb 2020 Lijun Zhang, Shiyin Lu, Tianbao Yang

To address this limitation, new performance measures, including dynamic regret and adaptive regret, have been proposed to guide the design of online algorithms.
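
For reference, adaptive regret is usually defined as the maximum static regret over any contiguous interval (notation ours): $\text{A-Regret}(T) = \max_{[r, s] \subseteq [T]} \big( \sum_{t=r}^{s} f_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=r}^{s} f_t(x) \big)$, which measures how well the algorithm performs on every time window simultaneously.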

Adaptive and Efficient Algorithms for Tracking the Best Expert

no code implementations5 Sep 2019 Shiyin Lu, Lijun Zhang

The first algorithm achieves a second-order tracking regret bound, which improves existing first-order bounds.
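
Roughly speaking, first-order tracking bounds scale with the cumulative loss of the comparator sequence, whereas second-order bounds scale with sums of squared losses (or variances), which can be substantially smaller when the losses are small or stable.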

Multi-Objective Generalized Linear Bandits

no code implementations30 May 2019 Shiyin Lu, Guanghui Wang, Yao Hu, Lijun Zhang

In this paper, we study the multi-objective bandits (MOB) problem, where a learner repeatedly selects one arm to play and then receives a reward vector consisting of multiple objectives.

Multi-Armed Bandits
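
A rough sketch of the interaction protocol described in the abstract above is given below; the environment, arm statistics, and scalarized arm-selection rule are illustrative placeholders, not the paper's generalized linear reward model or Pareto-based criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_objectives, horizon = 5, 3, 1000

# Hypothetical environment: each arm has an unknown mean reward vector.
true_means = rng.uniform(size=(n_arms, n_objectives))

estimates = np.zeros((n_arms, n_objectives))
counts = np.zeros(n_arms)

for t in range(horizon):
    # Placeholder selection rule: play each arm once, then pick the arm whose
    # estimated reward vector has the largest sum (a crude scalarization used
    # only to keep the example short).
    if t < n_arms:
        arm = t
    else:
        arm = int(np.argmax(estimates.sum(axis=1)))

    # The learner receives a reward *vector*, one entry per objective.
    reward = true_means[arm] + 0.1 * rng.standard_normal(n_objectives)

    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
```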

Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization

no code implementations15 May 2019 Guanghui Wang, Shiyin Lu, Lijun Zhang

In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions.
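
For the standard loss classes, the optimal rates such a universal method aims to match simultaneously are well known (these are classical minimax results, not claims from this abstract): $O(\sqrt{T})$ for general convex losses, $O(d \log T)$ for exp-concave losses, and $O(\log T)$ for strongly convex losses.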

SAdam: A Variant of Adam for Strongly Convex Functions

1 code implementation ICLR 2020 Guanghui Wang, Shiyin Lu, Wei-Wei Tu, Lijun Zhang

In this paper, we give an affirmative answer by developing a variant of Adam (referred to as SAdam) which achieves a data-dependent $O(\log T)$ regret bound for strongly convex functions.
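
As a generic illustration of the technique family only, the sketch below shows the common recipe adaptive methods use to obtain logarithmic regret under strong convexity: no square root in the denominator and a $1/t$ step size, as in SC-AdaGrad-style updates. It is not the paper's exact SAdam update, and all constants are hypothetical.

```python
import numpy as np

def sc_adaptive_step(x, grad, state, t, alpha=0.1, beta1=0.9, beta2=0.99, delta=1e-2):
    """One step of a hypothetical Adam-style update adapted to strong convexity.

    Compared with vanilla Adam, the denominator uses v (no square root) and the
    step size decays as alpha / t -- the usual route to O(log T) regret for
    strongly convex losses. Illustrative only; not the paper's SAdam update.
    """
    m, v = state
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    x_new = x - (alpha / t) * m / (v + delta / t)
    return x_new, (m, v)

# Toy usage on the strongly convex objective f(x) = 0.5 * ||x - 1||^2.
x, state = np.zeros(3), (np.zeros(3), np.zeros(3))
for t in range(1, 201):
    grad = x - np.ones(3)
    x, state = sc_adaptive_step(x, grad, state, t)
```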

Adaptive Online Learning in Dynamic Environments

no code implementations NeurIPS 2018 Lijun Zhang, Shiyin Lu, Zhi-Hua Zhou

In this paper, we study online convex optimization in dynamic environments, and aim to bound the dynamic regret with respect to any sequence of comparators.
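
With the same notation as in the entries above, this dynamic regret compares the learner against an arbitrary comparator sequence $u_1, \dots, u_T$; when all comparators coincide ($u_1 = \dots = u_T$), it reduces to the standard static regret, so bounds of this type strictly generalize the classical guarantee.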
