1 code implementation • 2 Mar 2024 • Kaituo Feng, Changsheng Li, Dongchun Ren, Ye Yuan, Guoren Wang
However, such oversized neural networks are impractical to deploy on resource-constrained systems, as they unavoidably demand more computation time and resources during inference. To address this, knowledge distillation offers a promising approach that compresses models by enabling a smaller student model to learn from a larger teacher model.
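The teacher-to-student transfer described above is commonly realized as a temperature-softened distillation loss. The sketch below shows the generic Hinton-style formulation, not necessarily this paper's exact objective:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = [x / T for x in logits]
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in z]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened outputs, scaled by
    # T^2 as in the classic knowledge-distillation formulation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * (math.log(pi) - math.log(qi))
                       for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # → 0.0
```

In practice this term is mixed with the ordinary cross-entropy on ground-truth labels; the temperature `T` controls how much of the teacher's "dark knowledge" in the non-argmax classes is exposed.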
1 code implementation • 9 Feb 2024 • Xunkai Li, Jingyuan Ma, Zhengyu Wu, Daohan Su, Wentao Zhang, Rong-Hua Li, Guoren Wang
However, (i) Most scalable GNNs tend to treat all nodes in graphs with the same propagation rules, neglecting their topological uniqueness; (ii) Existing node-wise propagation optimization strategies are insufficient on web-scale graphs with intricate topology, where a full portrayal of nodes' local properties is required.
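The contrast with uniform propagation rules can be made concrete with a minimal sketch of node-wise propagation, where each node keeps the representation from its own number of aggregation rounds rather than one global depth (an illustration only; the paper's optimization strategy is more involved):

```python
def propagate_step(x, adj):
    # One round of mean aggregation over each node's neighbours (plus self).
    out = []
    for i, nbrs in enumerate(adj):
        vals = [x[i]] + [x[j] for j in nbrs]
        out.append(sum(vals) / len(vals))
    return out

def nodewise_propagate(x, adj, hops):
    # Node-wise propagation sketch: node i keeps the representation after
    # hops[i] rounds, respecting its topological uniqueness.
    levels = [list(x)]
    for _ in range(max(hops)):
        levels.append(propagate_step(levels[-1], adj))
    return [levels[hops[i]][i] for i in range(len(x))]

# Two connected nodes: node 0 keeps its raw feature, node 1 smooths once.
print(nodewise_propagate([1.0, 3.0], [[1], [0]], [0, 1]))  # → [1.0, 2.0]
```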
no code implementations • 22 Jan 2024 • Xunkai Li, Zhengyu Wu, Wentao Zhang, Henan Sun, Rong-Hua Li, Guoren Wang
Then, each client conducts personalized training based on the local subgraph and the federated knowledge extractor.
1 code implementation • 22 Jan 2024 • Xunkai Li, Zhengyu Wu, Wentao Zhang, Yinlin Zhu, Rong-Hua Li, Guoren Wang
Existing FGL studies fall into two categories: (i) FGL Optimization, which improves multi-client training in existing machine learning models; (ii) FGL Model, which enhances performance with complex local models and multi-client interactions.
1 code implementation • 22 Jan 2024 • Xunkai Li, Meihao Liao, Zhengyu Wu, Daohan Su, Wentao Zhang, Rong-Hua Li, Guoren Wang
Most existing graph neural networks (GNNs) are limited to undirected graphs; this restricted view of the relational information hinders their expressive capability and their deployment in real-world scenarios.
1 code implementation • 22 Jan 2024 • Xunkai Li, Yulin Zhao, Zhengyu Wu, Wentao Zhang, Rong-Hua Li, Guoren Wang
With the rapid advancement of AI applications, the growing needs for data privacy and model robustness have highlighted the importance of machine unlearning, especially in thriving graph-based scenarios.
no code implementations • 7 Dec 2023 • Henan Sun, Xunkai Li, Zhengyu Wu, Daohan Su, Rong-Hua Li, Guoren Wang
Despite numerous attempts, most existing GNNs struggle to achieve optimal node representations due to the constraints of undirected graphs.
Ranked #28 on Node Classification on Cornell
no code implementations • 18 Oct 2023 • Shiye Wang, Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang
Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive.
no code implementations • 17 Oct 2023 • Zening Li, Rong-Hua Li, Meihao Liao, Fusheng Jin, Guoren Wang
We propose LDP-GE, a novel privacy-preserving graph embedding framework, to protect the privacy of node data.
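The snippet does not detail LDP-GE's mechanism, but local differential privacy for node data is typically achieved by having each node perturb its own values before they leave the device. A generic Laplace-mechanism sketch (an assumption, not the paper's specific design):

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from Laplace(0, scale).
    u = rng.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def perturb_locally(value, epsilon, sensitivity=1.0, rng=None):
    # Each node adds noise to its own data before sharing it, satisfying
    # epsilon-LDP for a query with the given sensitivity.
    rng = rng or random.Random()
    return value + laplace_noise(sensitivity / epsilon, rng)

# The noise is unbiased: averaging many perturbed copies approaches the truth.
rng = random.Random(0)
est = sum(perturb_locally(5.0, epsilon=1.0, rng=rng) for _ in range(20000)) / 20000
print(round(est, 1))  # close to 5.0
```

Smaller `epsilon` means stronger privacy but larger noise, which is exactly the privacy-utility trade-off such frameworks must balance.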
no code implementations • 20 Jul 2023 • Rongqing Li, Jiaqi Yu, Changsheng Li, Wenhan Luo, Ye Yuan, Guoren Wang
There is a crucial limitation: these works assume the dataset used for training the target model to be known beforehand and leverage this dataset for model attribute attack.
no code implementations • 2 Jul 2023 • Kaituo Feng, Yikun Miao, Changsheng Li, Ye Yuan, Guoren Wang
Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN.
no code implementations • 4 Dec 2022 • Yaxin Luopan, Rui Han, Qinglong Zhang, Chi Harold Liu, Guoren Wang
Upon training for a new task, the gradient integrator prevents catastrophic forgetting and mitigates negative knowledge transfer by effectively combining, through the global model, signature tasks identified from past local tasks and from other clients' current tasks.
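One common way to combine a new task's gradient with reference gradients without undoing past knowledge is conflict-averse projection in the spirit of PCGrad. This is an assumed instantiation for illustration; the paper's gradient integrator is not detailed in this snippet:

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def integrate_gradients(g_new, g_refs):
    # Project the new task's gradient away from any reference gradient it
    # conflicts with (negative dot product), so the update for the new
    # task does not move against directions that served earlier tasks.
    g = list(g_new)
    for r in g_refs:
        d = _dot(g, r)
        if d < 0:  # conflicting directions
            g = [gi - (d / _dot(r, r)) * ri for gi, ri in zip(g, r)]
    return g

# A gradient conflicting with a past task loses only its conflicting part.
print(integrate_gradients([1.0, -1.0], [[0.0, 1.0]]))  # → [1.0, 0.0]
```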
1 code implementation • 22 Jul 2022 • Hanjie Li, Changsheng Li, Kaituo Feng, Ye Yuan, Guoren Wang, Hongyuan Zha
By this means, we can adaptively propagate knowledge to other nodes for learning robust node embedding representations.
1 code implementation • 28 Jun 2022 • Yanjiang Yu, Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li, Ye Yuan, Guoren Wang
To this end, we propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space, which can directly contribute to the restoration quality.
no code implementations • 14 Jun 2022 • Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang
Knowledge distillation (KD) has demonstrated its effectiveness in boosting the performance of graph neural networks (GNNs), where the goal is to distill knowledge from a deeper teacher GNN into a shallower student GNN.
2 code implementations • 8 Jun 2022 • Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li, Guoren Wang
To address this problem, we first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
no code implementations • 26 Apr 2022 • Shiye Wang, Changsheng Li, Yanming Li, Ye Yuan, Guoren Wang
Inheriting the advantages of the information bottleneck, SIB-MSC learns a latent space for each view that captures the information common across the latent representations of different views, removing superfluous information from the view itself while retaining sufficient information for the latent representations of the other views.
1 code implementation • 19 Apr 2022 • Binhui Xie, Shuang Li, Mingjia Li, Chi Harold Liu, Gao Huang, Guoren Wang
Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the supervised model trained on a labeled source domain.
Ranked #4 on Semantic Segmentation on GTAV-to-Cityscapes Labels
no code implementations • 18 Dec 2021 • Rui Han, Qinglong Zhang, Chi Harold Liu, Guoren Wang, Jian Tang, Lydia Y. Chen
The prior art sheds light on exploring the accuracy-resource tradeoff by scaling model sizes in accordance with resource dynamics.
1 code implementation • NeurIPS 2021 • Fangrui Lv, Jian Liang, Kaixiong Gong, Shuang Li, Chi Harold Liu, Han Li, Di Liu, Guoren Wang
Domain adaptation (DA) attempts to transfer the knowledge from a labeled source domain to an unlabeled target domain that follows a different distribution from the source.
1 code implementation • 2 Dec 2021 • Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu, Xinjing Cheng, Guoren Wang
Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains.
no code implementations • 8 Nov 2021 • Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang
To make the learnt graph structure more stable and effective, we take the $k$-nearest neighbor graph as a prior and learn a relation propagation graph structure.
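The $k$-nearest-neighbor prior mentioned above can be sketched as a plain Euclidean k-NN graph construction (a minimal version; the learned relation propagation structure builds on top of such a prior):

```python
def knn_graph(points, k):
    # Build the k-nearest-neighbour graph (Euclidean distance), usable as
    # a structural prior before learning a propagation graph on top of it.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    graph = {}
    for i, p in enumerate(points):
        neighbours = sorted((j for j in range(len(points)) if j != i),
                            key=lambda j: dist2(p, points[j]))
        graph[i] = neighbours[:k]
    return graph

points = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(knn_graph(points, k=1))  # → {0: [1], 1: [0], 2: [1]}
```

Note the resulting graph is directed in general: node 2's nearest neighbour is node 1, but not vice versa.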
no code implementations • 28 Oct 2021 • Yanming Li, Changsheng Li, Shiye Wang, Ye Yuan, Guoren Wang
In this paper, we propose a new deep subspace clustering framework, motivated by energy-based models.
1 code implementation • 26 Oct 2021 • Zhenyu Lu, Yurong Cheng, Mingjun Zhong, George Stoian, Ye Yuan, Guoren Wang
A typical approach is to formulate causal inference as a supervised learning problem so that counterfactuals can be predicted.
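The supervised formulation can be illustrated with the simplest possible per-arm model, a group mean per treatment arm, where predicting a unit with the other arm's model gives the counterfactual. This is a toy sketch; real counterfactual learners condition on covariates:

```python
def fit_per_arm(data):
    # data: (treatment, outcome) pairs. Fit one trivial supervised model
    # (a group mean) per treatment arm; querying the other arm's model
    # yields the counterfactual outcome prediction.
    treated = [y for t, y in data if t == 1]
    control = [y for t, y in data if t == 0]
    return sum(control) / len(control), sum(treated) / len(treated)

def average_treatment_effect(data):
    mu0, mu1 = fit_per_arm(data)
    return mu1 - mu0

data = [(1, 3.0), (1, 5.0), (0, 1.0), (0, 1.0)]
print(average_treatment_effect(data))  # → 3.0
```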
1 code implementation • 11 May 2021 • Shuang Li, Binhui Xie, Bin Zang, Chi Harold Liu, Xinjing Cheng, Ruigang Yang, Guoren Wang
Specifically, we first design a pixel-wise contrastive loss by considering the correspondences between semantic distributions and pixel-wise representations from both domains.
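A pixel-wise contrastive loss of this kind is usually an InfoNCE-style objective: pull a pixel representation toward its cross-domain positive and push it away from negatives. The sketch below is a generic form; the paper's exact loss over semantic distributions may differ:

```python
import math

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def pixel_contrastive_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE: -log( exp(sim(a,p)/tau) / sum over positive+negatives ).
    logits = [cosine(anchor, positive) / tau]
    logits += [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)  # log-sum-exp with max subtraction for stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]

# An anchor aligned with its positive and orthogonal to the negative
# yields a near-zero loss.
loss = pixel_contrastive_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
print(loss < 0.01)  # → True
```

The temperature `tau` sharpens the similarity distribution: smaller values penalize hard negatives more aggressively.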
1 code implementation • 23 Mar 2021 • Shuang Li, Binhui Xie, Qiuxia Lin, Chi Harold Liu, Gao Huang, Guoren Wang
Domain Adaptation (DA) attempts to transfer knowledge learned in the labeled source domain to the unlabeled but related target domain without requiring large amounts of target supervision.
no code implementations • 28 Jul 2020 • Changsheng Li, Handong Ma, Zhao Kang, Ye Yuan, Xiao-Yu Zhang, Guoren Wang
Unsupervised active learning has attracted increasing attention in recent years; its goal is to select representative samples in an unsupervised setting for human annotation.
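A standard baseline for selecting representative samples without labels is greedy k-center selection: repeatedly pick the point farthest from everything selected so far. A minimal sketch (a common baseline, not this paper's specific method):

```python
def kcenter_greedy(points, m):
    # Greedy k-center selection for unsupervised sample selection:
    # each pick maximizes the distance to the current selected set,
    # so the m chosen points cover the data as evenly as possible.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    selected = [0]  # seed with an arbitrary point
    mind = [dist2(points[0], p) for p in points]
    while len(selected) < m:
        nxt = max(range(len(points)), key=lambda i: mind[i])
        selected.append(nxt)
        mind = [min(mind[i], dist2(points[nxt], p))
                for i, p in enumerate(points)]
    return selected

# Two well-separated clusters: the second pick lands in the far cluster.
pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.0, 10.1)]
print(kcenter_greedy(pts, 2))  # → [0, 3]
```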
1 code implementation • ICDE 2020 • Chi Harold Liu, Yinuo Zhao, Zipeng Dai, Ye Yuan, Guoren Wang, Dapeng Wu, Kin K. Leung
Spatial crowdsourcing (SC) utilizes the potential of a crowd to accomplish certain location-based tasks.
1 code implementation • 5 Dec 2019 • Yuni Lai, Linfeng Zhang, Donghong Han, Rui Zhou, Guoren Wang
In addition, a pooling method based on percentile is proposed to improve the accuracy of the model.
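Percentile-based pooling generalizes the usual pooling operators: the 100th percentile recovers max pooling and the 50th recovers median pooling, with intermediate values trading robustness to outliers against peak response. A minimal sketch using linear interpolation (the paper's exact variant is not specified here):

```python
def percentile_pool(values, q):
    # Pool a window of activations by its q-th percentile, with linear
    # interpolation between the two nearest sorted values.
    s = sorted(values)
    pos = (q / 100.0) * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1.0 - frac) + s[hi] * frac

print(percentile_pool([1.0, 2.0, 3.0, 4.0], 100))  # → 4.0  (max pooling)
print(percentile_pool([1.0, 2.0, 3.0, 4.0], 50))   # → 2.5  (median pooling)
```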
no code implementations • 18 Feb 2010 • Guoren Wang, Bin Wang, Xiaochun Yang, Ge Yu
The graph structure is a very important means to model schemaless data with complicated structures, such as protein-protein interaction networks, chemical compounds, knowledge query inferring systems, and road networks.