no code implementations • 8 Mar 2024 • Naman Agarwal, Pranjal Awasthi, Satyen Kale, Eric Zhao
Stacking is a heuristic technique for training deep residual networks: the number of layers is progressively increased, and each new layer is initialized by copying parameters from older layers. It has proven quite successful in improving the efficiency of training deep neural networks.
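The growth step described above can be sketched in a few lines. This is a minimal illustration, not the paper's training procedure: a "network" is modeled as a list of per-layer parameter vectors, and each new layer is initialized as a copy of the current top layer rather than randomly.

```python
def grow_by_stacking(layers, num_new):
    """Grow a network by appending layers initialized via stacking.

    layers  : list of per-layer parameter vectors (lists of floats)
    num_new : number of layers to add; each new layer starts as a
              copy of the parameters of the current top layer.
    """
    grown = [list(layer) for layer in layers]  # keep existing layers
    for _ in range(num_new):
        grown.append(list(grown[-1]))          # copy, don't alias
    return grown

net = grow_by_stacking([[1.0, 2.0], [3.0, 4.0]], 2)
# → [[1.0, 2.0], [3.0, 4.0], [3.0, 4.0], [3.0, 4.0]]
```

After growing, all layers would be trained jointly; the copied initialization is what makes the warm start effective compared to random initialization of the new layers.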
no code implementations • 22 Jul 2023 • Pranjal Awasthi, Nika Haghtalab, Eric Zhao
Multi-distribution learning is a natural generalization of PAC learning to settings with multiple data distributions.
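In multi-distribution learning, the goal is to do well on the worst of the given distributions rather than on a single one. A hedged sketch of that worst-case objective, with `hypothesis` and the dataset format being illustrative assumptions:

```python
def worst_case_error(hypothesis, datasets):
    """Max over distributions of empirical 0-1 error.

    hypothesis : callable x -> predicted label
    datasets   : list of datasets, one per distribution,
                 each a list of (x, y) pairs
    """
    errors = []
    for data in datasets:
        mistakes = sum(1 for x, y in data if hypothesis(x) != y)
        errors.append(mistakes / len(data))
    return max(errors)

h = lambda x: int(x > 0)
d1 = [(1, 1), (2, 1), (-1, 0), (-2, 0)]   # h is perfect here
d2 = [(1, 0), (2, 1)]                      # h errs on half
print(worst_case_error(h, [d1, d2]))       # → 0.5
```

A multi-distribution learner seeks a hypothesis minimizing this max, which is harder than ordinary PAC learning because low average error no longer suffices.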
1 code implementation • 22 Oct 2022 • Nika Haghtalab, Michael I. Jordan, Eric Zhao
This improves upon the best known sample complexity bounds for fair federated learning by Mohri et al. and collaborative learning by Nguyen and Zakynthinou by multiplicative factors of $n$ and $\log(n)/\epsilon^3$, respectively.
no code implementations • 29 Sep 2021 • Eric Zhao, De-An Huang, Hao Liu, Zhiding Yu, Anqi Liu, Olga Russakovsky, Anima Anandkumar
In real-world applications, however, there are multiple protected attributes yielding a large number of intersectional protected groups.
1 code implementation • 10 Jun 2021 • Eric Zhao, Alexander R. Trott, Caiming Xiong, Stephan Zheng
We study the problem of training a principal in a multi-agent general-sum game using reinforcement learning (RL).
no code implementations • 26 Apr 2021 • Yunjiang Jiang, Yue Shang, Rui Li, Wen-Yun Yang, Guoyu Tang, Chaoyi Ma, Yun Xiao, Eric Zhao
We describe a highly scalable feed-forward neural model that provides a relevance score for (query, item) pairs, using only the user query and item title as features, and both user click feedback and limited human ratings as labels.
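The shape of such a scorer can be sketched without any learned weights. The sketch below is purely illustrative, not the paper's model: it stands in for learned embeddings with a bag-of-words feature-hashing embedding and scores a (query, title) pair by cosine similarity.

```python
def embed(text, dim=8):
    """Bag-of-words embedding via feature hashing (stand-in for a
    learned embedding layer), L2-normalized."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def relevance_score(query, title):
    """Cosine similarity between query and item-title embeddings;
    a trained model would replace this with a feed-forward network."""
    q, t = embed(query), embed(title)
    return sum(a * b for a, b in zip(q, t))
```

Because both inputs are short text, this two-tower-style structure lets item-title embeddings be precomputed offline, which is where the scalability of such models typically comes from.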
no code implementations • 1 Jan 2021 • Eric Zhao, Alexander R Trott, Caiming Xiong, Stephan Zheng
Policies for real-world multi-agent problems, such as optimal taxation, can be learned in multi-agent simulations with AI agents that emulate humans.
no code implementations • 16 Jul 2020 • Eric Zhao, Anqi Liu, Animashree Anandkumar, Yisong Yue
We address the problem of active learning under label shift, where the class proportions of the source and target domains differ.
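A standard way to correct for differing class proportions is per-class importance weighting. The following is a minimal sketch of that correction, assuming the class counts of both domains are known (in practice the target proportions must be estimated); it is not the paper's specific algorithm.

```python
def label_shift_weights(source_counts, target_counts):
    """Per-class importance weights w[y] = p_target(y) / p_source(y).

    source_counts, target_counts : dicts mapping class label -> count
    Weighting source examples by w[y] makes the reweighted source
    label distribution match the target's.
    """
    n_src = sum(source_counts.values())
    n_tgt = sum(target_counts.values())
    return {
        y: (target_counts.get(y, 0) / n_tgt) / (source_counts[y] / n_src)
        for y in source_counts
    }

w = label_shift_weights({0: 50, 1: 50}, {0: 80, 1: 20})
# class 0 is over-represented in the target → upweighted (1.6);
# class 1 is under-represented → downweighted (0.4)
```

In an active-learning loop, such weights would also bias which points are queried, since labels from target-overrepresented classes carry more information about target-domain risk.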
8 code implementations • 19 Feb 2019 • Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, Dawei Yin
These advantages give GNNs great potential to advance social recommendation, since data in social recommender systems can be represented as a user-user social graph and a user-item graph, and learning latent factors of users and items is the key.
Ranked #3 on Recommendation Systems on Epinions (using extra training data)
2 code implementations • 24 Oct 2018 • Yao Ma, Ziyi Guo, Zhaochun Ren, Eric Zhao, Jiliang Tang, Dawei Yin
Current graph neural network models cannot utilize the dynamic information in dynamic graphs.