no code implementations • 19 Mar 2024 • Cheng Yang, Jixi Liu, Yunhe Yan, Chuan Shi
The F3 are expected to statistically neutralize the sensitive bias in node representations and provide additional nonsensitive information.
1 code implementation • 14 Mar 2024 • Sun Ao, Weilin Zhao, Xu Han, Cheng Yang, Zhiyuan Liu, Chuan Shi, Maosong Sun, Shengnan Wang, Teng Su
Effective attention modules have played a crucial role in the success of Transformer-based large language models (LLMs), but the quadratic time and memory complexities of these attention modules also pose a challenge when processing long sequences.
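The quadratic cost mentioned above comes from materializing the full n×n attention score matrix. A minimal numpy sketch of standard scaled dot-product attention (a generic illustration, not the paper's optimized kernel) makes the bottleneck explicit:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: the (n, n) score matrix is the source of the
    quadratic time and memory cost in sequence length n."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                        # (n, n): O(n^2) memory
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ V                                   # (n, d)

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
out = scaled_dot_product_attention(Q, K, V)
assert out.shape == (n, d)
```

For a sequence of length n, both `scores` and `weights` are (n, n), which is exactly what efficient attention variants try to avoid storing.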
1 code implementation • NeurIPS 2023 • Donglin Xia, Xiao Wang, Nian Liu, Chuan Shi
To address this challenge, we propose the Cluster Information Transfer (CIT) mechanism (code available at https://github.com/BUPT-GAMMA/CITGNN), which can learn invariant representations for GNNs, thereby improving their generalization ability to various and unknown test graphs with structure shift.
no code implementations • 5 Mar 2024 • Mengmei Zhang, Xiao Wang, Chuan Shi, Lingjuan Lyu, Tianchi Yang, Junping Du
To break this dilemma, we propose a new type of topology attack, named minimum-budget topology attack, aiming to adaptively find the minimum perturbation sufficient for a successful attack on each node.
1 code implementation • 19 Feb 2024 • Zhongjian Zhang, Mengmei Zhang, Yue Yu, Cheng Yang, Jiawei Liu, Chuan Shi
Furthermore, with GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics.
1 code implementation • 11 Feb 2024 • Mengmei Zhang, Mingwei Sun, Peng Wang, Shen Fan, Yanhu Mo, Xiaoxiao Xu, Hong Liu, Cheng Yang, Chuan Shi
Large language models (LLMs) like ChatGPT, which exhibit powerful zero-shot and instruction-following capabilities, have catalyzed a revolutionary transformation across diverse fields, especially for open-ended tasks.
no code implementations • 30 Jan 2024 • Yibo Li, Xiao Wang, Yujie Xing, Shaohua Fan, Ruijia Wang, Yaoqi Liu, Chuan Shi
Recently, there has been increasing interest in ensuring fairness on GNNs, but existing work assumes that the training and testing data follow the same distribution, i.e., that they come from the same graph.
1 code implementation • 23 Jan 2024 • Yanhu Mo, Xiao Wang, Shaohua Fan, Chuan Shi
How can we fix it and encourage the current GCL to learn better invariant representations?
no code implementations • 18 Dec 2023 • Tianrui Jia, Haoyang Li, Cheng Yang, Tao Tao, Chuan Shi
In this paper, we propose a novel graph invariant learning method based on invariant and variant patterns co-mixup strategy, which is capable of jointly generating mixed multiple environments and capturing invariant patterns from the mixed graph data.
Graph Representation Learning Out-of-Distribution Generalization
no code implementations • 14 Dec 2023 • Yibo Li, Xiao Wang, Hongrui Liu, Chuan Shi
In this paper, we propose a general diffusion equation framework with a fidelity term, which formally establishes the relationship between the diffusion process and a broader family of GNNs.
no code implementations • 23 Nov 2023 • Chunjing Gan, Binbin Hu, Bo Huang, Tianyu Zhao, Yingru Lin, Wenliang Zhong, Zhiqiang Zhang, Jun Zhou, Chuan Shi
In this paper, we highlight that both conformity and risk preference matter in making fund investment decisions beyond personal interest and seek to jointly characterize these aspects in a disentangled manner.
no code implementations • 18 Oct 2023 • Bo Yan, Yang Cao, Haoyu Wang, Wenchuan Yang, Junping Du, Chuan Shi
Existing HIN-based recommender systems operate under the assumption of centralized storage and model training.
no code implementations • 18 Oct 2023 • Jiawei Liu, Cheng Yang, Zhiyuan Lu, Junze Chen, Yibo Li, Mengmei Zhang, Ting Bai, Yuan Fang, Lichao Sun, Philip S. Yu, Chuan Shi
Foundation models have emerged as critical components in a variety of artificial intelligence applications, and showcase significant success in natural language processing and several other domains.
no code implementations • 13 Oct 2023 • Yang Liu, Deyu Bo, Chuan Shi
The increasing amount of graph data places requirements on the efficiency and scalability of graph neural networks (GNNs), despite their effectiveness in various graph-related applications.
no code implementations • 8 Oct 2023 • Yuxin Guo, Deyu Bo, Cheng Yang, Zhiyuan Lu, Zhongjian Zhang, Jixi Liu, Yufei Peng, Chuan Shi
Recently, instead of designing more complex neural architectures as model-centric approaches, the attention of the AI community has shifted to data-centric approaches, which focus on better processing of data to strengthen the ability of neural models.
1 code implementation • NeurIPS 2023 • Yue Yu, Xiao Wang, Mengmei Zhang, Nian Liu, Chuan Shi
To this end, we propose PrOvable Training (POT) for GCL, which regularizes the training of GCL to encode node embeddings that better follow the GCL principle.
no code implementations • 24 Apr 2023 • Nian Liu, Xiao Wang, Hui Han, Chuan Shi
Specifically, two views of a HIN (the network schema view and the meta-path view) are proposed to learn node embeddings, so as to capture both local and high-order structures simultaneously.
no code implementations • 2 Apr 2023 • Bo Yan, Cheng Yang, Chuan Shi, Yong Fang, Qi Li, Yanfang Ye, Junping Du
In recent years, with the proliferation of graph mining techniques, many researchers have investigated these techniques for capturing correlations between cyber entities and achieving high performance.
no code implementations • 2 Apr 2023 • Bo Yan, Cheng Yang, Chuan Shi, Jiawei Liu, Xiaochen Wang
AEHCL designs the intra-event and inter-event contrastive modules to exploit self-supervised AHIN information.
1 code implementation • 2 Mar 2023 • Deyu Bo, Chuan Shi, Lele Wang, Renjie Liao
To tackle these issues, we introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain, leading to a learnable set-to-set spectral filter.
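As a rough illustration of a learnable set-to-set spectral filter in the spirit of Specformer, the sketch below (toy graph, hypothetical `set_to_set_filter` with random weights, not the trained model) runs self-attention over eigenvalue embeddings so each filtered eigenvalue can depend on the whole spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy graph: normalized Laplacian of a 5-node path graph.
A = np.diag(np.ones(4), 1); A = A + A.T
D = np.diag(A.sum(1))
L = np.eye(5) - np.linalg.inv(np.sqrt(D)) @ A @ np.linalg.inv(np.sqrt(D))
eigvals, U = np.linalg.eigh(L)

def set_to_set_filter(lams, d=8):
    """Hypothetical set-to-set filter: self-attention over eigenvalue
    embeddings, so each output eigenvalue sees the full spectrum."""
    emb = np.stack([np.sin(lams * f) for f in range(1, d + 1)], axis=1)  # (n, d)
    Wq, Wk, Wv = rng.normal(size=(3, d, d)) / np.sqrt(d)
    scores = (emb @ Wq) @ (emb @ Wk).T / np.sqrt(d)
    att = np.exp(scores); att /= att.sum(1, keepdims=True)
    return (att @ (emb @ Wv)).mean(axis=1)               # (n,) new eigenvalues

new_lams = set_to_set_filter(eigvals)
X = rng.normal(size=(5, 3))
filtered = U @ np.diag(new_lams) @ U.T @ X               # spectral filtering
assert filtered.shape == (5, 3)
```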
no code implementations • 11 Feb 2023 • Deyu Bo, Xiao Wang, Yang Liu, Yuan Fang, Yawen Li, Chuan Shi
Graph neural networks (GNNs) have attracted considerable attention from the research community.
1 code implementation • 14 Dec 2022 • Xumeng Gong, Cheng Yang, Chuan Shi
We argue that typical data augmentation techniques (e.g., edge dropping) in GCL cannot generate diverse enough contrastive views to filter out noises.
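Edge dropping, the typical augmentation criticized above, is straightforward to sketch; `drop_edges` below is a hypothetical helper producing two contrastive views, not the paper's own augmentation:

```python
import numpy as np

def drop_edges(edge_index, p, rng):
    """Randomly drop a fraction p of edges to create a contrastive view.
    edge_index: (2, E) array of directed edges."""
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

rng = np.random.default_rng(0)
edges = np.array([[0, 1, 2, 3, 4],
                  [1, 2, 3, 4, 0]])   # a 5-cycle
view1 = drop_edges(edges, p=0.2, rng=rng)
view2 = drop_edges(edges, p=0.2, rng=rng)
assert view1.shape[0] == 2 and view1.shape[1] <= edges.shape[1]
```

Because the two views differ only in which edges survive, they tend to remain highly similar, which is the limited-diversity issue the abstract points out.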
no code implementations • 30 Nov 2022 • Yining Wang, Xumeng Gong, Shaochuan Li, Bing Yang, YiWu Sun, Chuan Shi, Yangang Wang, Cheng Yang, Hui Li, Le Song
Its improvement in both accuracy and efficiency makes it a valuable tool for de novo antibody design and could enable further advances in immuno-theory.
1 code implementation • 30 Nov 2022 • Shaohua Fan, Shuyang Zhang, Xiao Wang, Chuan Shi
In a dynamic graph, we propose to simultaneously estimate contemporaneous relationships and time-lagged interaction relationships between the node features.
1 code implementation • 6 Oct 2022 • Ruijia Wang, Xiao Wang, Chuan Shi, Le Song
Recent studies show that graph convolutional network (GCN) often performs worse for low-degree nodes, exhibiting the so-called structural unfairness for graphs with long-tailed degree distributions prevalent in the real world.
1 code implementation • 5 Oct 2022 • Nian Liu, Xiao Wang, Deyu Bo, Chuan Shi, Jian Pei
We then theoretically prove, via a contrastive invariance theorem, that GCL is able to learn invariance information. Together with our GAME rule, this uncovers for the first time that the representations learned by GCL essentially encode low-frequency information, which explains why GCL works.
1 code implementation • 28 Sep 2022 • Shaohua Fan, Xiao Wang, Yanhu Mo, Chuan Shi, Jian Tang
However, by presenting a graph classification investigation on training graphs with severe bias, we surprisingly discover that GNNs tend to exploit spurious correlations to make decisions, even when a causal correlation always exists.
no code implementations • 17 May 2022 • Binbin Hu, Zhiyang Hu, Zhiqiang Zhang, Jun Zhou, Chuan Shi
Knowledge representation learning has been commonly adopted to incorporate knowledge graph (KG) into various online services.
no code implementations • 8 May 2022 • Yuanxin Zhuang, Lingjuan Lyu, Chuan Shi, Carl Yang, Lichao Sun
Graph neural networks (GNNs) have been widely used to model graph-structured data, owing to their impressive performance in a wide range of practical applications.
1 code implementation • 1 Mar 2022 • Qian Zhao, Shuo Yang, Binbin Hu, Zhiqiang Zhang, Yakun Wang, Yusong Chen, Jun Zhou, Chuan Shi
Temporal link prediction, one of the most crucial tasks on temporal graphs, has attracted considerable attention from the research community.
1 code implementation • 18 Feb 2022 • Tianyu Zhao, Cheng Yang, Yibo Li, Quan Gan, Zhenyi Wang, Fengqi Liang, Huan Zhao, Yingxia Shao, Xiao Wang, Chuan Shi
Heterogeneous Graph Neural Network (HGNN) has been successfully employed in various tasks, but we cannot accurately know the importance of different design dimensions of HGNNs due to diverse architectures and applied scenarios.
1 code implementation • 27 Jan 2022 • Hongrui Liu, Binbin Hu, Xiao Wang, Chuan Shi, Zhiqiang Zhang, Jun Zhou
To this end, in this paper, we propose a novel Distribution Recovered Graph Self-Training framework (DR-GST), which could recover the distribution of the original labeled dataset.
no code implementations • 19 Jan 2022 • Shaohua Fan, Xiao Wang, Chuan Shi, Kun Kuang, Nian Liu, Bai Wang
Then to remove the bias in GNN estimation, we propose a novel Debiased Graph Neural Networks (DGNN) with a differentiated decorrelation regularizer.
2 code implementations • 14 Jan 2022 • Nian Liu, Xiao Wang, Lingfei Wu, Yu Chen, Xiaojie Guo, Chuan Shi
Furthermore, we maintain the performance of estimated views and the final view and reduce the mutual information of every two views.
1 code implementation • 20 Nov 2021 • Shaohua Fan, Xiao Wang, Chuan Shi, Peng Cui, Bai Wang
Graph Neural Networks (GNNs) are typically proposed without considering agnostic distribution shifts between training and testing graphs, which degrades their generalization ability in Out-Of-Distribution (OOD) settings.
2 code implementations • NeurIPS 2021 • Xiao Wang, Hongrui Liu, Chuan Shi, Cheng Yang
Specifically, we first verify that the confidence distribution in a graph has homophily property, and this finding inspires us to design a calibration GNN model (CaGCN) to learn the calibration function.
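The idea of learning a calibration function can be illustrated with plain per-node temperature scaling (a generic calibration sketch, not the paper's GNN-based calibrator CaGCN):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def temperature_scale(logits, t):
    """Per-node temperature scaling: t > 1 softens over-confident
    predictions without changing the predicted class."""
    return softmax(logits / t[:, None])

logits = np.array([[4.0, 1.0, 0.0],
                   [2.0, 1.9, 0.1]])
t = np.array([2.0, 1.0])             # node-wise temperatures
probs = temperature_scale(logits, t)
# Calibration preserves each node's predicted class.
assert np.argmax(probs, 1).tolist() == np.argmax(logits, 1).tolist()
```

A calibration model like CaGCN would predict `t` per node from the graph rather than fixing it by hand.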
1 code implementation • ACL 2021 • Linmei Hu, Tianchi Yang, Luhao Zhang, Wanjun Zhong, Duyu Tang, Chuan Shi, Nan Duan, Ming Zhou
Specifically, we first construct a directed heterogeneous document graph for each news item, incorporating topics and entities.
3 code implementations • 19 May 2021 • Xiao Wang, Nian Liu, Hui Han, Chuan Shi
Then the cross-view contrastive learning, as well as a view mask mechanism, is proposed, which is able to extract the positive and negative embeddings from two views.
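Cross-view contrastive objectives of this kind are commonly instantiated as an InfoNCE loss; the sketch below is a generic version, not the paper's exact loss with its view mask mechanism:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Cross-view InfoNCE: node i's embedding in view 1 should match its
    own embedding in view 2 (positive) against all other nodes (negatives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = np.exp(z1 @ z2.T / tau)            # (n, n) cross-view similarities
    pos = np.diag(sim)                       # matched pairs on the diagonal
    return float(-np.log(pos / sim.sum(axis=1)).mean())

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 4))
loss_aligned = info_nce(z, z)                       # views agree
loss_random = info_nce(z, rng.normal(size=(6, 4)))  # views unrelated
assert loss_aligned < loss_random
```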
no code implementations • 15 Apr 2021 • Yiding Zhang, Xiao Wang, Chuan Shi, Nian Liu, Guojie Song
We also find that the performance of some hyperbolic GCNs can be improved by simply replacing the graph operations with those we defined in this paper.
1 code implementation • 4 Mar 2021 • Cheng Yang, Jiawei Liu, Chuan Shi
Our framework extracts the knowledge of an arbitrary learned GNN model (teacher model), and injects it into a well-designed student model.
Ranked #1 on Node Classification on Cora (0.5%)
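The teacher-to-student knowledge injection described above can be sketched with a standard distillation loss on temperature-softened predictions (a generic KD objective, assumed here for illustration rather than the framework's exact one):

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) on temperature-softened distributions: the
    student is trained to mimic the teacher GNN's soft predictions."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float((p * (np.log(p) - np.log(q))).sum(axis=1).mean())

teacher = np.array([[3.0, 1.0, 0.0],
                    [0.5, 2.5, 0.0]])
assert distill_loss(teacher, teacher) < 1e-9   # perfect mimicry -> zero loss
assert distill_loss(np.zeros_like(teacher), teacher) > 0.0
```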
1 code implementation • 28 Jan 2021 • Meiqi Zhu, Xiao Wang, Chuan Shi, Houye Ji, Peng Cui
Graph Neural Networks (GNNs) have received considerable attention on graph-structured data learning for a wide variety of tasks.
1 code implementation • 4 Jan 2021 • Deyu Bo, Xiao Wang, Chuan Shi, HuaWei Shen
For a deeper understanding, we theoretically analyze the roles of low-frequency signals and high-frequency signals on learning node representations, which further explains why FAGCN can perform well on different types of networks.
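The low- and high-frequency signals analyzed above correspond to simple low-pass and high-pass graph filters; a toy sketch on a 4-cycle (standard filters, not FAGCN's learned combination):

```python
import numpy as np

# Toy graph: 4-cycle; symmetric normalized adjacency.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
Dm = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = Dm @ A @ Dm

F_low = np.eye(4) + A_hat    # low-pass: averages neighbors, smooths signals
F_high = np.eye(4) - A_hat   # high-pass: differences, sharpens signals

x_smooth = np.ones(4)        # constant (lowest-frequency) signal
assert np.allclose(F_high @ x_smooth, 0)            # high-pass removes it
assert np.allclose(F_low @ x_smooth, 2 * x_smooth)  # low-pass keeps it
```

Low-pass filtering helps on assortative networks where neighbors share labels; high-pass filtering helps on disassortative ones where they differ.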
no code implementations • 1 Jan 2021 • Xiaojun Ma, Ziyao Li, Lingjun Xu, Guojie Song, Yi Li, Chuan Shi
To address this weakness, we introduce a novel framework of conducting graph convolutions, where nodes are discretely selected among multi-hop neighborhoods to construct adaptive receptive fields (ARFs).
no code implementations • 4 Dec 2020 • Junshan Wang, Ziyao Li, Qingqing Long, Weiyu Zhang, Guojie Song, Chuan Shi
Since noises are often unknown on real graphs, we design two generators, namely a graph generator and a noise generator, to identify normal structures and noises in an unsupervised setting.
no code implementations • 30 Nov 2020 • Xiao Wang, Deyu Bo, Chuan Shi, Shaohua Fan, Yanfang Ye, Philip S. Yu
Heterogeneous graphs (HGs), also known as heterogeneous information networks, have become ubiquitous in real-world scenarios. HG embedding, which aims to learn representations in a lower-dimensional space while preserving heterogeneous structures and semantics for downstream tasks (e.g., node/graph classification, node clustering, link prediction), has therefore drawn considerable attention in recent years.
no code implementations • 6 Oct 2020 • Guanglin Niu, Bo Li, Yongfei Zhang, Yongpan Sheng, Chuan Shi, Jingyang Li, ShiLiang Pu
Inference on a large-scale knowledge graph (KG) is of great importance for KG applications like question answering.
no code implementations • 2 Sep 2020 • Jinghan Shi, Houye Ji, Chuan Shi, Xiao Wang, Zhiqiang Zhang, Jun Zhou
The prosperous development of e-commerce has spawned diverse recommendation systems.
no code implementations • 5 Jul 2020 • Xiao Wang, Meiqi Zhu, Deyu Bo, Peng Cui, Chuan Shi, Jian Pei
We tackle the challenge and propose an adaptive multi-channel graph convolutional networks for semi-supervised classification (AM-GCN).
1 code implementation • ACL 2020 • Linmei Hu, Siyong Xu, Chen Li, Cheng Yang, Chuan Shi, Nan Duan, Xing Xie, Ming Zhou
Furthermore, the learned representations are disentangled with latent preference factors by a neighborhood routing algorithm, which can enhance expressiveness and interpretability.
1 code implementation • 29 Jun 2020 • Xiao Wang, Shaohua Fan, Kun Kuang, Chuan Shi, Jiawei Liu, Bai Wang
Most existing clustering algorithms are proposed without considering the selection bias in data.
2 code implementations • 5 Feb 2020 • Deyu Bo, Xiao Wang, Chuan Shi, Meiqi Zhu, Emiao Lu, Peng Cui
The strength of deep clustering methods lies in extracting useful representations from the data itself, rather than from the structure of the data, which has received scarce attention in representation learning.
1 code implementation • 6 Dec 2019 • Yiding Zhang, Xiao Wang, Xunqiang Jiang, Chuan Shi, Yanfang Ye
Graph neural network (GNN) has shown superior performance in dealing with graphs, which has attracted considerable research attention recently.
1 code implementation • 25 Nov 2019 • Xiao Wang, Ruijia Wang, Chuan Shi, Guojie Song, Qingyong Li
The interactions of users and items in a recommender system can be naturally modeled as a user-item bipartite graph.
no code implementations • 18 Nov 2019 • Yilun Jin, Guojie Song, Chuan Shi
Specifically, we capture local graph structures via random anonymous walks, powerful and flexible tools that represent structural patterns.
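An anonymous walk replaces node identities with first-occurrence indices, so only the structural pattern of the walk survives; a minimal sketch:

```python
def anonymize(walk):
    """Relabel a random walk by first-occurrence order, so only the
    structural (revisit) pattern remains, not node identities."""
    first_seen = {}
    out = []
    for v in walk:
        if v not in first_seen:
            first_seen[v] = len(first_seen)
        out.append(first_seen[v])
    return out

# Two walks over different nodes but with the same revisit pattern
# map to the same anonymous walk.
assert anonymize(['a', 'b', 'a', 'c']) == [0, 1, 0, 2]
assert anonymize([5, 9, 5, 2]) == [0, 1, 0, 2]
```

This identity-invariance is what lets anonymous walks represent local structures transferably across graphs.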
no code implementations • IJCNLP 2019 • Linmei Hu, Luhao Zhang, Chuan Shi, Liqiang Nie, Weili Guan, Cheng Yang
Distantly-supervised relation extraction has proven to be effective to find relational facts from texts.
no code implementations • IJCNLP 2019 • Linmei Hu, Tianchi Yang, Chuan Shi, Houye Ji, Xiao-Li Li
Then, we propose Heterogeneous Graph ATtention networks (HGAT) to embed the HIN for short text classification based on a dual-level attention mechanism, including node-level and type-level attentions.
no code implementations • 30 Oct 2019 • Linmei Hu, Chen Li, Chuan Shi, Cheng Yang, Chao Shao
Existing methods on news recommendation mainly include collaborative filtering methods which rely on direct user-item interactions and content based methods which characterize the content of user reading history.
no code implementations • 14 Sep 2019 • Chuan Shi, Xiaotian Han, Li Song, Xiao Wang, Senzhang Wang, Junping Du, Philip S. Yu
However, the characteristics of users and the properties of items may stem from different aspects, e.g., the brand-aspect and category-aspect of items.
1 code implementation • 10 Sep 2019 • Yuanfu Lu, Xiao Wang, Chuan Shi, Philip S. Yu, Yanfang Ye
The micro-dynamics describe the formation process of network structures in a detailed manner, while the macro-dynamics refer to the evolution pattern of the network scale.
no code implementations • 15 May 2019 • Yuanfu Lu, Chuan Shi, Linmei Hu, Zhiyuan Liu
In this paper, we take the structural characteristics of heterogeneous relations into consideration and propose a novel Relation structure-aware Heterogeneous Information Network Embedding model (RHINE).
3 code implementations • WWW 2019 • Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Peng Cui, P. Yu, Yanfang Ye
With the learned importance from both node-level and semantic-level attention, the importance of node and meta-path can be fully considered.
Ranked #1 on Heterogeneous Node Classification on DBLP (PACT) 14k
Social and Information Networks
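The semantic-level attention described in this entry can be sketched as a softmax-weighted fusion of meta-path-specific embeddings (hypothetical `semantic_attention` with a random scoring vector, not HAN's trained parameters):

```python
import numpy as np

def semantic_attention(metapath_embs, w):
    """Weight meta-path-specific node embeddings by a learned score.
    metapath_embs: (P, N, d); w: (d,) hypothetical attention vector."""
    # Score each meta-path by its mean projected embedding.
    scores = np.array([np.tanh(e @ w).mean() for e in metapath_embs])
    beta = np.exp(scores) / np.exp(scores).sum()     # meta-path importance
    fused = np.tensordot(beta, metapath_embs, axes=1)  # (N, d)
    return fused, beta

rng = np.random.default_rng(0)
embs = rng.normal(size=(2, 5, 4))    # 2 meta-paths, 5 nodes, dim 4
w = rng.normal(size=4)
fused, beta = semantic_attention(embs, w)
assert fused.shape == (5, 4) and np.isclose(beta.sum(), 1.0)
```

In the full model, node-level attention would first produce each `metapath_embs[p]` by attending over meta-path-based neighbors.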
1 code implementation • 29 Nov 2017 • Chuan Shi, Binbin Hu, Wayne Xin Zhao, Philip S. Yu
In this paper, we propose a novel heterogeneous network embedding based approach for HIN based recommendation, called HERec.
Social and Information Networks
no code implementations • 28 Sep 2013 • Chuan Shi, Xiangnan Kong, Yue Huang, Philip S. Yu, Bin Wu
Similarity search is an important function in many applications, which usually focuses on measuring the similarity between objects with the same type.