no code implementations • 31 Oct 2023 • Gaichao Li, Jinsong Chen, John E. Hopcroft, Kun He
Graph pooling methods have been widely used for downsampling graphs, achieving impressive results on multiple graph-level tasks like graph classification and graph generation.
no code implementations • 17 Oct 2023 • Jinsong Chen, Gaichao Li, John E. Hopcroft, Kun He
In this way, SignGT could learn informative node representations from both long-range dependencies and local topology information.
Ranked #4 on Node Classification on Actor
no code implementations • 22 May 2023 • Jinsong Chen, Chang Liu, Kaiyuan Gao, Gaichao Li, Kun He
Graph Transformers, emerging as a new architecture for graph representation learning, suffer from quadratic complexity in the number of nodes when handling large graphs.
no code implementations • 15 Nov 2022 • Gaichao Li, Jinsong Chen, Kun He
MNA-GT further employs an attention layer to learn the importance of different attention kernels to enable the model to adaptively capture the graph structural information for different nodes.
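The snippet above describes weighting several attention kernels per node. A minimal sketch of such adaptive weighting, with illustrative shapes and names (this is an assumption about the general idea, not MNA-GT's actual implementation):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def combine_attention_kernels(kernel_outputs, scores):
    """Per-node softmax weighting over K attention-kernel outputs.

    kernel_outputs: array of shape (K, n, d), one representation per kernel
    scores:         array of shape (n, K), learned per-node importances
    Returns an (n, d) array: each node's weighted sum over the K kernels.
    """
    weights = softmax(scores, axis=1)                # (n, K)
    return np.einsum('nk,knd->nd', weights, kernel_outputs)
```

With zero (uniform) scores every node simply averages the kernel outputs; training the score function lets different nodes emphasize different kernels.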
no code implementations • 15 Nov 2022 • Jinsong Chen, Boyu Li, Kun He
The decoupled Graph Convolutional Network (GCN), a recent development of GCN that decouples the neighborhood aggregation and feature transformation in each convolutional layer, has shown promising performance for graph representation learning.
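The decoupling described above separates feature transformation from neighborhood aggregation instead of interleaving them in every layer. A minimal NumPy sketch of this general pattern, with an arbitrary random linear map standing in for a learned transformation (an illustrative assumption, not the paper's model):

```python
import numpy as np

def decoupled_gcn(adj, features, num_props=2, rng=None):
    """Decoupled propagation sketch: transform features once, then
    repeatedly aggregate over the normalized adjacency with no
    per-step weight matrices."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = adj.shape[0]
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    a_hat = adj + np.eye(n)
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    p = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Feature transformation (single random linear map + ReLU, illustrative)
    w = rng.standard_normal((features.shape[1], 4))
    h = np.maximum(features @ w, 0.0)
    # Decoupled propagation: aggregation steps carry no parameters
    for _ in range(num_props):
        h = p @ h
    return h
```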
no code implementations • 21 Jun 2022 • Jinsong Chen, Boyu Li, Qiuting He, Kun He
However, they follow the traditional structure-aware propagation strategy of GCNs, making it hard to capture the attribute correlation of nodes and leaving them sensitive to structural noise introduced by edges whose two endpoints belong to different categories.
1 code implementation • 10 Jun 2022 • Jinsong Chen, Kaiyuan Gao, Gaichao Li, Kun He
In this work, we observe that existing graph Transformers treat nodes as independent tokens and construct a single long sequence composed of all node tokens to train the Transformer model, making it hard to scale to large graphs due to the quadratic complexity in the number of nodes for the self-attention computation.
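The quadratic bottleneck described above is easy to see in code: full self-attention over all node tokens materializes an n x n score matrix. A minimal sketch with identity Q/K/V projections (an illustrative simplification, not any particular model):

```python
import numpy as np

def node_token_attention(x):
    """Full self-attention over n node tokens of dimension d.
    The (n, n) score matrix makes time and memory quadratic in n."""
    n, d = x.shape
    q, k, v = x, x, x                        # identity projections, for illustration
    scores = q @ k.T / np.sqrt(d)            # shape (n, n): the scaling bottleneck
    scores -= scores.max(axis=1, keepdims=True)  # stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v                       # each node attends to every node
```

Doubling the number of nodes quadruples the size of `scores`, which is why such models struggle on large graphs.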
no code implementations • 7 Aug 2018 • Kai Chen, Twan van Laarhoven, Perry Groot, Jinsong Chen, Elena Marchiori
The resulting kernel is called Multi-Output Convolution Spectral Mixture (MOCSM) kernel.