1 code implementation • 6 Apr 2022 • Hao Jin, Yang Peng, Wenhao Yang, Shusen Wang, Zhihua Zhang
We study a Federated Reinforcement Learning (FedRL) problem in which $n$ agents collaboratively learn a single policy without sharing the trajectories they collected during agent-environment interaction.
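This setting is commonly realized by exchanging policy parameters rather than data. The sketch below shows one such FedAvg-style pattern, purely as an illustration of the setting — it is not necessarily the algorithm in this paper, and `local_update`, `federated_round`, and the per-agent gradient functions are hypothetical names.

```python
import numpy as np

def local_update(theta, policy_grad, steps=10, lr=0.05):
    """One agent improves the shared policy on its own trajectories;
    only the updated parameters leave the agent, never the data."""
    theta = theta.copy()
    for _ in range(steps):
        theta += lr * policy_grad(theta)  # local policy-gradient ascent step
    return theta

def federated_round(theta, agent_grads):
    """Server broadcasts theta, each agent updates locally,
    and the server averages the returned parameters."""
    updates = [local_update(theta, g) for g in agent_grads]
    return np.mean(updates, axis=0)
```

With agents whose local objectives pull the parameters toward different targets, repeated rounds converge to a consensus policy without any trajectory ever being shared.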
no code implementations • 12 Apr 2021 • Guangzeng Xie, Hao Jin, Dachao Lin, Zhihua Zhang
We propose \textit{Meta-Regularization}, a novel approach for the adaptive choice of the learning rate in first-order gradient descent methods.
no code implementations • 26 Jan 2021 • Hao Jin, Alessandro Narduzzo, Minoru Nohara, Hidenori Takagi, Nigel Hussey, Kamran Behnia
We present a study of the thermoelectric (Seebeck and Nernst) response in heavily overdoped, non-superconducting La$_{1.67}$Sr$_{0.33}$CuO$_4$.
Superconductivity • Materials Science • Strongly Correlated Electrons
1 code implementation • 15 Sep 2019 • Xiaosen Wang, Hao Jin, Yichen Yang, Kun He
In natural language processing, deep learning models have recently been shown to be vulnerable to various types of adversarial perturbations, but relatively little work has been done on the defense side.
no code implementations • 18 Aug 2019 • Hao Jin, Dachao Lin, Zhihua Zhang
Stochastic variance-reduced gradient (SVRG) is a classical optimization method.
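The classical SVRG scheme periodically computes a full gradient at a snapshot point and uses it as a control variate to reduce the variance of subsequent stochastic steps. A minimal sketch (generic finite-sum objective; the function names are illustrative):

```python
import numpy as np

def svrg(grad, x0, data, lr=0.1, epochs=20, inner_steps=None):
    """Minimal SVRG: each epoch takes a snapshot, computes its full
    gradient, then runs variance-reduced stochastic updates."""
    n = len(data)
    if inner_steps is None:
        inner_steps = n
    x = x0.copy()
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = np.mean([grad(snapshot, d) for d in data], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n)
            # control-variate estimate: unbiased, with vanishing variance
            # as x approaches the snapshot
            g = grad(x, data[i]) - grad(snapshot, data[i]) + full_grad
            x -= lr * g
    return x
```

For example, minimizing the mean of $(x - d_i)^2$ over a small dataset drives $x$ to the data mean.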
no code implementations • ICLR 2019 • Guangzeng Xie, Hao Jin, Dachao Lin, Zhihua Zhang
Specifically, we impose a regularization term on the learning rate via a generalized distance, and cast the joint updating process of the parameter and the learning rate into a max-min problem.
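To make the idea concrete, the toy sketch below jointly updates the iterate and its step size, penalizing the step size's distance from its previous value. This is a simplified stand-in, not the paper's scheme: the paper uses a generalized distance and a max-min formulation, whereas here the step size is chosen by a small candidate search with a quadratic proximity penalty, and all names are hypothetical.

```python
import numpy as np

def meta_reg_gd(f, grad, x0, eta0=0.1, lam=1.0, steps=50):
    """Illustrative joint update of parameter x and learning rate eta:
    eta is re-chosen each step, but a proximity penalty lam*(e - eta)^2
    keeps it close to its previous value (the 'regularization' idea)."""
    x, eta = np.asarray(x0, dtype=float).copy(), eta0
    for _ in range(steps):
        g = grad(x)
        # candidate step sizes around the current one
        candidates = eta * np.array([0.5, 1.0, 2.0])
        # score each candidate: loss after the step plus proximity penalty
        scores = [f(x - e * g) + lam * (e - eta) ** 2 for e in candidates]
        eta = candidates[int(np.argmin(scores))]
        x = x - eta * g
    return x, eta
```

On a simple quadratic, the step size adapts upward while the penalty prevents it from jumping arbitrarily between iterations.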
1 code implementation • 4 Jan 2019 • Deli Gong, Muoi Tran, Shweta Shinde, Hao Jin, Vyas Sekar, Prateek Saxena, Min Suk Kang
In this paper, we show the technical feasibility of verifiable in-network filtering, called VIF, that offers filtering verifiability to DDoS victims and neighbor ASes.
Cryptography and Security