no code implementations • 21 Jul 2023 • Qizhang Li, Yiwen Guo, Xiaochen Yang, Wangmeng Zuo, Hao Chen
Our ICLR work advocated enhancing the transferability of adversarial examples by incorporating a Bayesian formulation into the model parameters, which effectively emulates an ensemble of infinitely many deep neural networks. In this paper, we introduce a novel extension that incorporates the Bayesian formulation into the model input as well, enabling joint diversification of both the model input and the model parameters.
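The joint-diversification idea can be sketched with a toy Monte Carlo attack step: average the input gradient over Gaussian perturbations of both the weights and the input, then take a sign step. This is a minimal illustration, not the paper's method; the linear "model", the noise scales `sigma_w`/`sigma_x`, and the single FGSM-style step are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # Gradient of the logistic loss w.r.t. the input x for a linear model w
    # (a toy stand-in for a deep network).
    p = 1.0 / (1.0 + np.exp(-y * np.dot(w, x)))
    return -(1.0 - p) * y * w

def diversified_attack_grad(w, x, y, n_samples=20, sigma_w=0.1, sigma_x=0.1):
    # Average the input gradient over Gaussian perturbations of BOTH the
    # model parameters and the model input, approximating the ensemble
    # induced by a Bayesian treatment of each.
    g = np.zeros_like(x)
    for _ in range(n_samples):
        w_s = w + sigma_w * rng.standard_normal(w.shape)
        x_s = x + sigma_x * rng.standard_normal(x.shape)
        g += loss_grad(w_s, x_s, y)
    return g / n_samples

w = rng.standard_normal(5)
x = rng.standard_normal(5)
g = diversified_attack_grad(w, x, y=1)
x_adv = x + 0.05 * np.sign(g)  # one sign-gradient step with the averaged gradient
```

Averaging over sampled models and inputs is what discourages the perturbation from overfitting to a single network.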
no code implementations • 27 Sep 2021 • Chenyu Wang, Zongyu Lin, Xiaochen Yang, Jiao Sun, Mingxuan Yue, Cyrus Shahabi
Based on the homophily assumption of GNNs, we propose a homophily-aware constraint to regularize the optimization of the region graph, so that neighboring region nodes on the learned graph share similar crime patterns, fitting the mechanism of diffusion convolution.
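A homophily-style graph regularizer of this kind is typically a Laplacian smoothness penalty: edge weights on the learned region graph multiply the squared distance between the connected nodes' representations. The sketch below assumes a dense adjacency matrix and generic node features; the actual constraint in the paper may differ in form.

```python
import numpy as np

def homophily_penalty(adj, feats):
    # Sum of adj[i, j] * ||feats[i] - feats[j]||^2 over all region pairs:
    # strongly connected regions on the learned graph are pushed toward
    # similar representations (e.g. crime patterns). For symmetric adj this
    # equals 2 * trace(F^T L F), with L the graph Laplacian.
    n = adj.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            diff = feats[i] - feats[j]
            total += adj[i, j] * float(diff @ diff)
    return total
```

Minimizing this term alongside the task loss makes the learned graph and the node representations consistent with the homophily assumption.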
no code implementations • 17 May 2021 • Xiaoxu Li, Xiaochen Yang, Zhanyu Ma, Jing-Hao Xue
Few-shot image classification is a challenging problem that aims to achieve human-level recognition based on only a small number of training images.
1 code implementation • NeurIPS 2020 • Mingzhi Dong, Xiaochen Yang, Rui Zhu, Yujiang Wang, Jing-Hao Xue
Metric learning aims to learn a distance measure that can benefit distance-based methods such as the nearest neighbour (NN) classifier.
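The standard setup can be illustrated with a Mahalanobis metric: a linear map `L` is learned so that Euclidean distance in the transformed space (i.e. the metric parameterized by `M = L^T L`) better suits a nearest-neighbour classifier. This is a generic sketch of the setting, not the paper's specific algorithm.

```python
import numpy as np

def mahalanobis_dist(x, y, L):
    # Learned metric: d(x, y) = ||L x - L y||, i.e. Euclidean distance
    # after the linear map L, which parameterizes M = L^T L.
    d = L @ x - L @ y
    return float(np.sqrt(d @ d))

def nn_predict(x, X_train, y_train, L):
    # 1-NN classification under the learned metric.
    dists = [mahalanobis_dist(x, xi, L) for xi in X_train]
    return y_train[int(np.argmin(dists))]
```

With `L` set to the identity this reduces to ordinary Euclidean 1-NN; metric learning methods optimize `L` so that same-class pairs become close and different-class pairs far apart.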
no code implementations • 1 Jul 2020 • Xiaochen Yang, Jean Honorio
In this paper, we study the sample complexity lower bounds for the exact recovery of parameters and for a positive excess risk of a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools.
1 code implementation • 27 Jun 2020 • Xiaoxu Li, Liyun Yu, Xiaochen Yang, Zhanyu Ma, Jing-Hao Xue, Jie Cao, Jun Guo
Despite achieving state-of-the-art performance, deep learning methods generally require a large amount of labeled data during training and may suffer from overfitting when the sample size is small.
1 code implementation • 10 Jun 2020 • Xiaochen Yang, Yiwen Guo, Mingzhi Dong, Jing-Hao Xue
Many existing methods maximize, or at least constrain, a distance margin in the feature space that separates similar and dissimilar pairs of instances, in order to guarantee generalization ability.
no code implementations • 9 Feb 2018 • Mingzhi Dong, Xiaochen Yang, Yang Wu, Jing-Hao Xue
In this paper, we propose the Lipschitz margin ratio and a new metric learning framework for classification that maximizes this ratio.
no code implementations • 9 Feb 2018 • Mingzhi Dong, Yujiang Wang, Xiaochen Yang, Jing-Hao Xue
The performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to learn a suitable metric from the data.