2 code implementations • TPAMI 2023 • Zhiyong Yang, Qianqian Xu, Wenzheng Hou, Shilong Bao, Yuan He, Xiaochun Cao, Qingming Huang
On top of this, we show that: 1) under mild conditions, AdAUC can be optimized equivalently with score-based or instance-wise-loss-based perturbations, making it compatible with most popular adversarial example generation methods.
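As an illustration of score-based perturbations against an AUC objective, the sketch below runs a PGD-style attack on a pairwise squared-hinge AUC surrogate; the surrogate, step sizes, and helper names are illustrative assumptions rather than the paper's exact AdAUC formulation.

```python
import torch

def pairwise_auc_surrogate(pos_scores, neg_scores, gamma=1.0):
    # Squared hinge over all positive/negative score pairs: small when
    # positives are ranked above negatives by at least margin gamma.
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)
    return torch.clamp(gamma - diff, min=0).pow(2).mean()

def pgd_on_auc(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # y holds 0/1 labels; the batch is assumed to contain both classes.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        scores = model(x + delta).squeeze(-1)
        loss = pairwise_auc_surrogate(scores[y == 1], scores[y == 0])
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascend the AUC surrogate
            delta.clamp_(-eps, eps)        # stay inside the L-inf ball
    return (x + delta).detach()
```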
1 code implementation • TPAMI 2023 • Zhiyong Yang, Qianqian Xu, Shilong Bao, Peisong Wen, Xiaochun Cao, Qingming Huang
We propose a new result that not only addresses the interdependency issue but also brings a much sharper bound with weaker assumptions about the loss function.
2 code implementations • NeurIPS 2022 • Huiyang Shao, Qianqian Xu, Zhiyong Yang, Shilong Bao, Qingming Huang
Existing methods require a large sample size and suffer from a slow convergence rate, especially for TPAUC.
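For reference, a rough empirical two-way partial AUC (TPAUC) restricts the pairwise comparison to the hardest positives (lowest scores) and hardest negatives (highest scores); the fractions and normalization below are illustrative and may differ from the paper's parameterization.

```python
import numpy as np

def empirical_tpauc(pos_scores, neg_scores, pos_frac=0.5, neg_frac=0.5):
    # Keep only the hardest positives (lowest-scored) and hardest negatives
    # (highest-scored), then count correctly ordered pairs among them.
    pos = np.sort(pos_scores)[: max(1, int(len(pos_scores) * pos_frac))]
    neg = np.sort(neg_scores)[::-1][: max(1, int(len(neg_scores) * neg_frac))]
    return float((pos[:, None] > neg[None, :]).mean())
```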
1 code implementation • NeurIPS 2023 • Shilong Bao, Qianqian Xu, Zhiyong Yang, Yuan He, Xiaochun Cao, Qingming Huang
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems (RS), closing the gap between metric learning and Collaborative Filtering.
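A minimal sketch of the CML objective, assuming the usual hinge triplet loss in a shared user/item embedding space; the class and argument names are illustrative, and negative items are assumed to be sampled elsewhere.

```python
import torch
import torch.nn as nn

class CML(nn.Module):
    def __init__(self, n_users, n_items, dim=64, margin=1.0):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.margin = margin

    def forward(self, u, pos_i, neg_i):
        # Squared Euclidean distances between user and item embeddings.
        d_pos = (self.user(u) - self.item(pos_i)).pow(2).sum(-1)
        d_neg = (self.user(u) - self.item(neg_i)).pow(2).sum(-1)
        # Pull the observed item closer than the sampled negative by a margin.
        return torch.clamp(d_pos - d_neg + self.margin, min=0).mean()
```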
no code implementations • ICML 2022 • Wenzheng Hou, Qianqian Xu, Zhiyong Yang, Shilong Bao, Yuan He, Qingming Huang
Our analysis differs from existing studies in that the algorithm must generate adversarial examples by computing the gradient of a min-max problem.
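To make the min-max structure concrete, the sketch below pairs an inner maximization (crafting perturbations) with an outer minimization (a model update) on the same surrogate; it reuses the illustrative `pairwise_auc_surrogate` and `pgd_on_auc` helpers sketched above and is not the paper's exact procedure.

```python
import torch

def adversarial_auc_step(model, optimizer, x, y):
    x_adv = pgd_on_auc(model, x, y)                  # inner max: ascend the surrogate
    scores = model(x_adv).squeeze(-1)
    loss = pairwise_auc_surrogate(scores[y == 1], scores[y == 0])
    optimizer.zero_grad()
    loss.backward()                                  # outer min: descend on the same surrogate
    optimizer.step()
    return loss.item()
```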
1 code implementation • TPAMI 2022 • Zhiyong Yang, Qianqian Xu, Shilong Bao, Yuan He, Xiaochun Cao, Qingming Huang
The critical challenge along this course lies in the difficulty of performing gradient-based optimization with end-to-end stochastic training, even with a proper choice of surrogate loss.
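To see where the difficulty arises, a naive mini-batch surrogate might swap the 0/1 indicator for a logistic surrogate and pick hard positives/negatives per batch, as in the hypothetical sketch below; the per-batch top-k selection is precisely the non-differentiable, biased step that the paper's framework handles more carefully.

```python
import torch
import torch.nn.functional as F

def naive_tpauc_surrogate(pos_scores, neg_scores, pos_k, neg_k):
    hard_pos, _ = torch.topk(pos_scores, pos_k, largest=False)  # lowest-scored positives
    hard_neg, _ = torch.topk(neg_scores, neg_k, largest=True)   # highest-scored negatives
    diff = hard_pos.unsqueeze(1) - hard_neg.unsqueeze(0)
    return F.softplus(-diff).mean()   # smooth logistic proxy for the 0/1 ranking error
```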
no code implementations • TPAMI 2022 • Shilong Bao, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang
However, in this work, through a theoretical analysis, we find that negative sampling leads to a biased estimate of the generalization error.
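As a toy illustration of how sampling can bias an estimate (not the paper's analysis): when the per-user loss aggregates negatives non-linearly, e.g., through a hardest-negative max, the loss averaged over sampled subsets systematically underestimates the loss computed over all negatives.

```python
import numpy as np

rng = np.random.default_rng(0)
neg_scores = rng.normal(size=1000)     # scores of all unobserved items
pos_score = 0.5
# Hinge loss measured against the hardest (highest-scored) negative.
full_loss = max(0.0, 1.0 + neg_scores.max() - pos_score)

sampled_losses = []
for _ in range(2000):
    subset = rng.choice(neg_scores, size=10, replace=False)
    sampled_losses.append(max(0.0, 1.0 + subset.max() - pos_score))

# The sampled estimate is systematically smaller than the full loss.
print(full_loss, np.mean(sampled_losses))
```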
no code implementations • TPAMI 2021 • Zhiyong Yang, Qianqian Xu, Shilong Bao, Xiaochun Cao, Qingming Huang
Our framework builds on the M metric, a well-known multiclass extension of AUC.
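One common formulation of the M metric (Hand & Till, 2001) averages pairwise one-vs-one AUCs over all class pairs; the sketch below computes it from per-class scores and is an illustration rather than the paper's exact estimator.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

def m_metric(y_true, y_score):
    # y_true: integer labels; y_score: (n_samples, n_classes) scores,
    # where column c is assumed to hold the score for class c.
    classes = np.unique(y_true)
    aucs = []
    for a, b in combinations(classes, 2):
        mask = np.isin(y_true, [a, b])
        # A(a|b) and A(b|a): AUCs of each class's score for separating the pair.
        auc_ab = roc_auc_score(y_true[mask] == a, y_score[mask][:, a])
        auc_ba = roc_auc_score(y_true[mask] == b, y_score[mask][:, b])
        aucs.append(0.5 * (auc_ab + auc_ba))
    return float(np.mean(aucs))
```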
1 code implementation • ICML 2021 • Zhiyong Yang, Qianqian Xu, Shilong Bao, Yuan He, Xiaochun Cao, Qingming Huang
The critical challenge along this course lies in the difficulty of performing gradient-based optimization with end-to-end stochastic training, even with a proper choice of surrogate loss.
1 code implementation • ACM MM 2019 • Shilong Bao, Qianqian Xu, Ke Ma, Zhiyong Yang, Xiaochun Cao, Qingming Huang
From the margin-theory point of view, we then propose a generalization enhancement scheme for sparse and insufficient labels by optimizing the margin distribution.
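In the spirit of margin-distribution theory, one can penalize a small mean margin and a large margin variance instead of only maximizing the minimum margin; the sketch below is a hypothetical binary-case objective, not the paper's exact scheme.

```python
import torch

def margin_distribution_loss(scores, labels, lam=1.0):
    # Binary case: margin = y * f(x) with y in {-1, +1}.
    margins = labels * scores
    # Encourage a large mean margin and a small margin variance.
    return -margins.mean() + lam * margins.var()
```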