no code implementations • 21 Feb 2024 • Yuchen Liang, Peizhong Ju, Yingbin Liang, Ness Shroff
In this paper, we establish convergence guarantees for substantially larger classes of distributions under discrete-time diffusion models and further improve the convergence rate for distributions with bounded support.
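For orientation, here is a minimal sketch of a discrete-time (DDPM-style) reverse sampler of the kind such analyses cover; the linear noise schedule and the stand-in score function are illustrative assumptions, not the paper's construction.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # noise schedule (assumed, not from the paper)
alphas = 1.0 - betas

def score_estimate(x, t):
    # Stand-in for a learned score of p_t; this is the exact score of a
    # standard Gaussian target, used only to keep the sketch runnable.
    return -x

def reverse_sample(dim, rng):
    x = rng.standard_normal(dim)     # start the reverse chain from pure noise
    for t in reversed(range(T)):
        # Score-parameterized DDPM mean update for one discrete reverse step.
        x = (x + betas[t] * score_estimate(x, t)) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(dim)
    return x

print(reverse_sample(dim=2, rng=np.random.default_rng(0)))
```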
no code implementations • 22 Jun 2023 • Yining Li, Peizhong Ju, Ness Shroff
To address this issue, we formulate a general optimization problem for determining the optimal grouping strategy, which strikes a balance between performance loss and sample/computational complexity.
no code implementations • 8 Jun 2023 • Peizhong Ju, Sen Lin, Mark S. Squillante, Yingbin Liang, Ness B. Shroff
For example, when the total number of features in the source task's learning model is fixed, we show that it is more advantageous to allocate a greater number of redundant features to the task-specific part rather than the common part.
no code implementations • 1 Jun 2023 • Peizhong Ju, Arnob Ghosh, Ness B. Shroff
Fairness plays a crucial role in various multi-agent systems (e.g., communication networks, financial markets, etc.).
no code implementations • 9 Apr 2023 • Peizhong Ju, Yingbin Liang, Ness B. Shroff
However, due to features unique to meta-learning, such as task-specific gradient-descent inner training and the diversity/fluctuation of the ground-truth signals across training tasks, we find new and interesting properties that do not exist in single-task linear regression.
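To make "task-specific gradient-descent inner training" concrete, below is a minimal sketch of one inner loop on a sampled linear regression task; the step size, step count, and Gaussian task data are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 20
w_meta = np.zeros(d)                        # shared meta-initialization

def inner_train(w0, X, y, lr=0.05, steps=10):
    """Run a few gradient steps on one task's data, starting from w0."""
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the squared loss
        w -= lr * grad
    return w

# One sampled task: the ground-truth signal varies from task to task.
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

w_task = inner_train(w_meta, X, y)
print("post-adaptation training loss:", np.mean((X @ w_task - y) ** 2))
```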
no code implementations • 12 Feb 2023 • Sen Lin, Peizhong Ju, Yingbin Liang, Ness Shroff
In particular, there is a lack of understanding of which factors are important and how they affect "catastrophic forgetting" and generalization performance.
no code implementations • 4 Jun 2022 • Peizhong Ju, Xiaojun Lin, Ness B. Shroff
Our upper bound reveals that, between the two hidden layers, the test error descends faster with respect to the number of neurons in the second hidden layer (the one closer to the output) than with respect to the number in the first hidden layer (the one closer to the input).
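The sketch below just fixes the naming used above: a two-hidden-layer ReLU network whose first hidden layer (width p1) sits next to the input and whose second hidden layer (width p2) sits next to the output; the random Gaussian weights are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p1, p2 = 5, 64, 256              # input dim; the widths the bound compares

W1 = rng.standard_normal((p1, d))   # first hidden layer (closer to the input)
W2 = rng.standard_normal((p2, p1))  # second hidden layer (closer to the output)
v = rng.standard_normal(p2)         # output weights

def f(x):
    h1 = np.maximum(W1 @ x, 0.0)    # ReLU activations, layer 1
    h2 = np.maximum(W2 @ h1, 0.0)   # ReLU activations, layer 2
    return v @ h2

print(f(rng.standard_normal(d)))
```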
no code implementations • 9 Mar 2021 • Peizhong Ju, Xiaojun Lin, Ness B. Shroff
Specifically, for a class of learnable functions, we provide a new upper bound on the generalization error that approaches a small limiting value, even when the number of neurons $p$ approaches infinity.
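As a rough numerical illustration of this regime, the sketch below fits a min-$\ell_2$-norm overfitting solution of a random-feature model and watches the test error as $p$ grows; the ReLU features, the target function, and the min-$\ell_2$-norm choice are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 100
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])                     # assumed "learnable" target

for p in [200, 1000, 5000]:
    W = rng.standard_normal((p, d))     # fixed random hidden weights
    Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(p)    # ReLU random features
    a = np.linalg.pinv(Phi) @ y         # min-l2-norm interpolating weights
    Xt = rng.standard_normal((n, d))    # fresh test inputs
    Pt = np.maximum(Xt @ W.T, 0.0) / np.sqrt(p)
    print(p, "test MSE:", np.mean((Pt @ a - np.sin(Xt[:, 0])) ** 2))
```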
1 code implementation • NeurIPS 2020 • Peizhong Ju, Xiaojun Lin, Jia Liu
Under a sparse true linear regression model with $p$ i.i.d.
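The snippet above is cut off, but the named setting, overparameterized sparse linear regression, pairs naturally with basis pursuit; here is a minimal sketch of the min-$\ell_1$-norm overfitting (interpolating) solution via its standard LP reformulation, with the sparse Gaussian design being an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, s = 20, 50, 3                  # samples, features, sparsity (assumed)
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:s] = rng.standard_normal(s)
y = X @ w_true                       # noiseless labels for simplicity

# Basis pursuit: min ||w||_1 subject to X w = y, written as an LP with
# w = u - v, u >= 0, v >= 0.
c = np.ones(2 * p)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * p))
w_hat = res.x[:p] - res.x[p:]
print("recovery error:", np.linalg.norm(w_hat - w_true))
```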