no code implementations • 26 Sep 2023 • Yawei Zhao
Stochastic optimization methods such as mirror descent have wide applications due to their low computational cost.
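The paper's specific method is not shown here; as a minimal sketch of the general idea, textbook mirror descent with the entropy mirror map (exponentiated gradient) on the probability simplex illustrates the low per-step cost: one gradient call plus O(d) arithmetic. All names below are illustrative, not from the paper.

```python
import numpy as np

def mirror_descent_simplex(grad_fn, x0, lr=0.1, steps=100):
    """Textbook mirror descent with the entropy mirror map.
    Each iteration is one gradient call plus O(d) work, which is
    the low computational cost the entry refers to."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        x = x * np.exp(-lr * g)   # multiplicative (exponentiated-gradient) update
        x /= x.sum()              # renormalize back onto the simplex
    return x

# Minimize f(x) = <c, x> over the simplex; the optimum puts all
# mass on the smallest coordinate of c (index 1 here).
c = np.array([3.0, 1.0, 2.0])
x = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, lr=0.5, steps=200)
```

With a linear loss the iterates concentrate on the cheapest coordinate, which is the expected behavior of exponentiated gradient.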
1 code implementation • 26 Jun 2023 • Yawei Zhao, Qinghe Liu, Xinwang Liu, Kunlun He
Compared with 13 existing related methods, the proposed method achieves the best model performance while improving communication efficiency by up to 60%.
no code implementations • 15 Feb 2023 • Meng Liu, Ke Liang, Yawei Zhao, Wenxuan Tu, Sihang Zhou, Xinbiao Gan, Xinwang Liu, Kunlun He
To address this issue, we propose a self-supervised method called S2T for temporal graph learning, which extracts both temporal and structural information to learn more informative node representations.
no code implementations • 28 Nov 2019 • Yawei Zhao, Qian Zhao, Xingxing Zhang, En Zhu, Xinwang Liu, Jianping Yin
We provide a new theoretical analysis framework, which yields an interesting observation: the relation between the switching cost and the dynamic regret differs between the OA and OCO settings.
no code implementations • 4 Aug 2019 • Yawei Zhao, En Zhu, Xinwang Liu, Chang Tang, Deke Guo, Jianping Yin
Specifically, we propose a new variant of the alternating direction method of multipliers (ADMM) to solve this problem efficiently.
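The paper's ADMM variant is not detailed in this snippet; as a hedged sketch of how standard ADMM decomposes a composite objective, here is the classic lasso splitting, minimize (1/2)||Ax - b||² + λ||z||₁ subject to x = z. The problem instance and all names are illustrative, not the paper's.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Standard ADMM for the lasso: alternate a quadratic x-update,
    an l1 proximal z-update, and a dual u-update."""
    n = A.shape[1]
    M = A.T @ A + rho * np.eye(n)     # reused in every x-update
    Atb = A.T @ b
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)         # l1 proximal step
        u = u + x - z                                # dual (scaled) update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
true = np.zeros(10)
true[:3] = [1.0, -2.0, 0.5]
b = A @ true
w = admm_lasso(A, b, lam=0.05)
```

On this noise-free instance the splitting recovers the sparse coefficients up to mild l1 shrinkage, which is the behavior any ADMM variant for such problems builds on.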
no code implementations • 29 Jan 2019 • Yawei Zhao, Chen Yu, Peilin Zhao, Hanlin Tang, Shuang Qiu, Ji Liu
Decentralized Online Learning (online learning in decentralized networks) has attracted increasing attention, since it can help data providers cooperatively solve their online problems without sharing their private data with a third party or other providers.
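As a minimal sketch of the setting (not the paper's algorithm), decentralized online gradient descent lets each node take a gradient step on its private loss and then gossip-average its iterate with neighbors through a doubly stochastic mixing matrix, so raw data never leaves a node. The mixing matrix, losses, and names below are illustrative assumptions.

```python
import numpy as np

def decentralized_ogd(local_grads, W, x0, lr=0.1, rounds=100):
    """Each node takes a local gradient step on its private loss,
    then averages its iterate with its neighbors via the doubly
    stochastic mixing matrix W; only iterates are exchanged."""
    X = np.tile(x0, (W.shape[0], 1)).astype(float)  # one row per node
    for _ in range(rounds):
        G = np.stack([g(X[i]) for i, g in enumerate(local_grads)])
        X = W @ (X - lr * G)  # local step, then gossip averaging
    return X

# Three nodes, each with a private quadratic (x - a_i)^2; the
# network-wide optimum is the mean of the targets a_i, i.e. 3.0.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
targets = [1.0, 2.0, 6.0]
grads = [lambda x, a=a: 2.0 * (x - a) for a in targets]
X = decentralized_ogd(grads, W, np.zeros(1), lr=0.1, rounds=300)
```

With a constant step size the nodes reach a neighborhood of the network-wide optimum (the average of the local minimizers), with a residual consensus error of order lr.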
no code implementations • 26 Dec 2018 • Yawei Zhao, En Zhu, Xinwang Liu, Jianping Yin
We provide a new theoretical analysis framework to investigate online gradient descent in dynamic environments.
no code implementations • 8 Oct 2018 • Yawei Zhao, Shuang Qiu, Ji Liu
While the online gradient method has been shown to be optimal for the static regret metric, the optimal algorithm for the dynamic regret remains unknown.
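To make the two regret notions concrete, here is a hedged toy sketch (not the paper's analysis): plain online gradient descent on drifting quadratic losses f_t(x) = (x - a_t)², where dynamic regret compares each round's loss with that round's minimum f_t(a_t) = 0, whereas static regret would use one fixed comparator for all rounds. The loss sequence and names are illustrative.

```python
import numpy as np

def ogd_dynamic_regret(targets, x0=0.0, lr=0.5):
    """Online gradient descent on f_t(x) = (x - a_t)^2 with a
    drifting target a_t. Returns the dynamic regret, i.e. the sum
    of f_t(x_t) - min_x f_t(x) over rounds (the min is 0 here)."""
    x, regret = x0, 0.0
    for a in targets:
        regret += (x - a) ** 2       # f_t(x_t) minus the per-round optimum
        x -= lr * 2.0 * (x - a)      # gradient step on the revealed loss
    return regret

# A slowly drifting comparator sequence: a_t moves by 0.01 per round.
targets = np.arange(0.0, 1.0, 0.01)
r = ogd_dynamic_regret(targets)
```

With lr = 0.5 the iterate exactly tracks the previous target, so the dynamic regret reduces to the squared path length of the comparator sequence, illustrating why dynamic regret bounds are typically stated in terms of how much the environment drifts.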
no code implementations • 20 Aug 2018 • Yawei Zhao, Kai Xu, Xinwang Liu, En Zhu, Xinzhong Zhu, Jianping Yin
The reason is that it finds similar instances directly from their features, which is often affected by imperfect data and thus returns sub-optimal results.