Search Results for author: Baokun Wang

Found 6 papers, 2 papers with code

LasTGL: An Industrial Framework for Large-Scale Temporal Graph Learning

no code implementations • 28 Nov 2023 • Jintang Li, Jiawang Dan, Ruofan Wu, Jing Zhou, Sheng Tian, Yunfei Liu, Baokun Wang, Changhua Meng, Weiqiang Wang, Yuchang Zhu, Liang Chen, Zibin Zheng

Over the past few years, graph neural networks (GNNs) have become powerful and practical tools for learning on (static) graph-structured data.

Graph Learning

Hetero$^2$Net: Heterophily-aware Representation Learning on Heterogeneous Graphs

no code implementations • 18 Oct 2023 • Jintang Li, Zheng Wei, Jiawang Dan, Jing Zhou, Yuchang Zhu, Ruofan Wu, Baokun Wang, Zhang Zhen, Changhua Meng, Hong Jin, Zibin Zheng, Liang Chen

Through in-depth investigations on several real-world heterogeneous graphs exhibiting varying levels of heterophily, we have observed that heterogeneous graph neural networks (HGNNs), which inherit many mechanisms from GNNs designed for homogeneous graphs, fail to generalize to heterogeneous graphs with heterophily, i.e., a low level of homophily.

Node Classification • Representation Learning
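
To make the "level of homophily" mentioned above concrete: a common measure is the edge homophily ratio, the fraction of edges that join nodes sharing a label. A minimal sketch (a standard metric, not code from the paper):

```python
import numpy as np

def edge_homophily(edges: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of edges whose endpoints share a label.

    Values near 1 indicate homophily; values near 0 indicate
    heterophily. `edges` is a (num_edges, 2) array of node indices.
    """
    src, dst = edges[:, 0], edges[:, 1]
    return float(np.mean(labels[src] == labels[dst]))

# Toy example: only one of three edges connects same-label nodes,
# so the homophily ratio is 1/3 (a fairly heterophilous graph).
edges = np.array([[0, 1], [1, 2], [2, 3]])
labels = np.array([0, 1, 1, 0])
print(edge_homophily(edges, labels))  # 0.333...
```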

Self-supervision meets kernel graph neural models: From architecture to augmentations

no code implementations • 17 Oct 2023 • Jiawang Dan, Ruofan Wu, Yunpeng Liu, Baokun Wang, Changhua Meng, Tengfei Liu, Tianyi Zhang, Ningtao Wang, Xing Fu, Qi Li, Weiqiang Wang

Recently, the idea of designing neural models on graphs using the theory of graph kernels has emerged as a more transparent, and sometimes more expressive, alternative to MPNNs; such models are known as kernel graph neural networks (KGNNs).

Data Augmentation • Graph Classification • +2
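
For context, KGNN-style models build on classical graph kernels such as the Weisfeiler-Lehman (WL) subtree kernel. The following is a minimal WL feature-extraction sketch for intuition, not the paper's architecture:

```python
from collections import Counter

def wl_features(adj: dict, labels: dict, iters: int = 2) -> Counter:
    """Weisfeiler-Lehman subtree features: repeatedly refine each
    node's label by hashing it together with its sorted neighbour
    labels, counting label occurrences across all iterations."""
    feats = Counter(labels.values())
    for _ in range(iters):
        labels = {
            v: hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
            for v in adj
        }
        feats.update(labels.values())
    return feats

# A kernel value between two graphs can then be computed as, e.g.,
# the dot product of their WL feature count vectors.
adj = {0: [1], 1: [0, 2], 2: [1]}   # path graph 0-1-2
labels = {0: "a", 1: "a", 2: "b"}
print(wl_features(adj, labels))
```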

A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks

1 code implementation • 30 May 2023 • Jintang Li, Huizhe Zhang, Ruofan Wu, Zulun Zhu, Baokun Wang, Changhua Meng, Zibin Zheng, Liang Chen

While contrastive self-supervised learning has become the de facto learning paradigm for graph neural networks, the pursuit of higher task accuracy requires a larger hidden dimensionality to learn informative and discriminative full-precision representations, raising largely overlooked concerns about computation, memory footprint, and energy consumption in real-world applications.

Contrastive Learning • Self-Supervised Learning
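
The "1-bit spikes" in the title refer to binary activations in place of full-precision embeddings. A hypothetical PyTorch sketch of such a binarization, trained with a straight-through estimator (an illustration of the general technique, not the paper's released implementation):

```python
import torch

class BinarySpike(torch.autograd.Function):
    """1-bit activation with a straight-through estimator:
    the forward pass emits {0, 1} spikes, while the backward
    pass lets gradients flow through unchanged so the encoder
    remains trainable despite the non-differentiable threshold."""

    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

h = torch.randn(8, 64, requires_grad=True)   # full-precision node embeddings
spikes = BinarySpike.apply(h)                # 1-bit representations
spikes.sum().backward()                      # gradients flow via the STE

# Storage comparison: 64 float32 dims cost 256 bytes per node,
# versus 8 bytes once the 64 spike bits are bit-packed.
```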

DEDGAT: Dual Embedding of Directed Graph Attention Networks for Detecting Financial Risk

no code implementations • 6 Mar 2023 • Jiafu Wu, Mufeng Yao, Dong Wu, Mingmin Chi, Baokun Wang, Ruofan Wu, Xin Fu, Changhua Meng, Weiqiang Wang

Graph representation plays an important role in financial risk control, where relationships among users can be modeled as a graph.

Graph Attention
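
As an illustration of modeling user relationships as a directed graph (hypothetical transaction data; this is not the paper's DEDGAT model, though keeping separate out-/in-neighbour views mirrors its dual-embedding idea of treating out-degree and in-degree contexts separately):

```python
from collections import defaultdict

# Hypothetical transaction records: (payer, payee, amount).
transactions = [("u1", "u2", 120.0), ("u2", "u3", 75.0), ("u1", "u3", 40.0)]

# Users become nodes; transactions become directed, weighted edges.
# Out-edges capture a user's payments, in-edges their receipts.
out_edges = defaultdict(list)
in_edges = defaultdict(list)
for payer, payee, amount in transactions:
    out_edges[payer].append((payee, amount))
    in_edges[payee].append((payer, amount))

print(dict(out_edges))  # {'u1': [('u2', 120.0), ('u3', 40.0)], 'u2': [('u3', 75.0)]}
print(dict(in_edges))   # {'u2': [('u1', 120.0)], 'u3': [('u2', 75.0), ('u1', 40.0)]}
```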
