no code implementations • 15 Dec 2023 • Nan Yin, Mengzhu Wang, Zhenghan Chen, Giulia De Masi, Bin Gu, Huan Xiong
Current work often replaces Recurrent Neural Networks (RNNs) with SNNs, using binary features instead of continuous ones for efficient training, which overlooks graph structure information and loses detail during propagation.
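A minimal sketch of the detail loss mentioned above (hypothetical illustration, not the paper's method; `spike_encode` and its threshold are assumptions):

```python
import torch

def spike_encode(x, threshold=1.0):
    """Binarize continuous features with a simple threshold (integrate-and-fire style).

    All magnitude information is collapsed to {0, 1}, which is the detail
    that binary spike features lose relative to continuous ones.
    """
    return (x >= threshold).float()

# Continuous node features, e.g., from a message-passing step
h = torch.tensor([[0.2, 1.5, 3.0],
                  [0.9, 1.1, 0.1]])

print(spike_encode(h))  # 1.5 and 3.0 both become 1.0; their difference is gone
```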
no code implementations • 10 Dec 2023 • Houcheng Su, Daixian Liu, Mengzhu Wang, Wei Wang
Recent domain adaptation studies have shown that maximizing the sum of singular values of the prediction results can simultaneously enhance their confidence (discriminability) and diversity.
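A minimal sketch of such an objective, in the spirit of batch nuclear-norm maximization (the exact formulation in the paper may differ; the function name is an assumption):

```python
import torch
import torch.nn.functional as F

def singular_value_sum_loss(logits):
    """Negative sum of singular values of the batch prediction matrix.

    Minimizing this maximizes the nuclear norm of the softmax outputs,
    encouraging predictions that are both confident and diverse.
    """
    probs = F.softmax(logits, dim=1)      # (batch, num_classes)
    sigma = torch.linalg.svdvals(probs)   # singular values, differentiable
    return -sigma.sum()

logits = torch.randn(32, 10, requires_grad=True)
loss = singular_value_sum_loss(logits)
loss.backward()  # gradients flow back through the SVD
```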
no code implementations • 8 Jun 2023 • Nan Yin, Li Shen, Mengzhu Wang, Long Lan, Zeyu Ma, Chong Chen, Xian-Sheng Hua, Xiao Luo
Although graph neural networks (GNNs) have achieved impressive results in graph classification, they often require abundant task-specific labels, which can be extremely costly to acquire.
no code implementations • 2 Aug 2022 • Mengzhu Wang, Jianlong Yuan, Qi Qian, Zhibin Wang, Hao Li
Further, we provide an in-depth analysis of the mechanism and rationale behind our approach, giving a better understanding of why leveraging logits in lieu of features helps domain generalization.
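One hedged illustration of operating on logits rather than intermediate features (a sketch under my own assumptions, not the paper's method): place a consistency regularizer directly on the classifier outputs of two augmented views, so the constraint lives in label space rather than feature space.

```python
import torch
import torch.nn.functional as F

def logit_consistency(logits_a, logits_b):
    """Symmetric KL divergence between the logits of two views of a batch.

    Regularizing post-classifier logits, instead of intermediate features,
    is one concrete way to leverage logits in lieu of features.
    """
    p = F.log_softmax(logits_a, dim=1)
    q = F.log_softmax(logits_b, dim=1)
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))
```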
no code implementations • 12 Apr 2022 • Wenju Zhang, Xiang Zhang, Qing Liao, Long Lan, Mengzhu Wang, Wei Wang, Baoyun Peng, Zhengming Ding
Nuclear norm maximization has shown the power to enhance the transferability of unsupervised domain adaptation (UDA) models in an empirical scheme.
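A common empirical recipe consistent with this line of work (a sketch, not the paper's exact objective; `lam` and the variable names are placeholders) is to add a nuclear-norm bonus on target-domain predictions to the source classification loss:

```python
import torch
import torch.nn.functional as F

def nuclear_norm_bonus(target_logits):
    """Nuclear norm of the target-domain prediction matrix, to be maximized."""
    probs = F.softmax(target_logits, dim=1)
    return torch.linalg.matrix_norm(probs, ord="nuc")

# Hypothetical combined objective (illustrative only):
# loss = F.cross_entropy(source_logits, source_labels) \
#        - lam * nuclear_norm_bonus(target_logits)
```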
no code implementations • 28 Dec 2020 • Mengzhu Wang, Xiang Zhang, Long Lan, Wei Wang, Huibin Tan, Zhigang Luo
In this paper, we emphasize the significance of reducing feature redundancy for improving UDA in a bi-level way.
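As one illustrative way to reduce feature redundancy (a sketch under my own assumptions, not the paper's bi-level scheme): penalize off-diagonal entries of the feature correlation matrix so that feature dimensions carry decorrelated, non-redundant information.

```python
import torch

def redundancy_penalty(features, eps=1e-5):
    """Sum of squared off-diagonal correlations between feature dimensions.

    Decorrelating dimensions is one concrete instantiation of
    'reducing feature redundancy'; illustrative only.
    """
    z = (features - features.mean(0)) / (features.std(0) + eps)  # standardize per dim
    corr = (z.T @ z) / z.shape[0]                                # (d, d) correlation
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()
```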