no code implementations • 5 May 2024 • Xiyuan Wang, Pan Li, Muhan Zhang
In contrast, this paper introduces a novel graph-to-set conversion method that bijectively transforms interconnected nodes into a set of independent points and then uses a set encoder to learn the graph representation.
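As a minimal illustration of the idea (not necessarily the paper's exact construction), a symmetric eigendecomposition turns an adjacency matrix into one point per node while keeping the conversion invertible, and a sum-pooled encoder over those points is permutation invariant:

```python
import numpy as np

# Toy adjacency matrix of a 4-node path graph (symmetric).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Eigendecomposition A = U diag(lam) U^T assigns node i the "point"
# q_i = U[i] * sqrt(|lam|), with the eigenvalue signs kept separately.
lam, U = np.linalg.eigh(A)
Q = U * np.sqrt(np.abs(lam))   # one point (row) per node
S = np.diag(np.sign(lam))

# Signed inner products of the points recover the adjacency exactly,
# so the graph-to-set conversion loses no edge information.
A_rec = Q @ S @ Q.T
assert np.allclose(A_rec, A)

# A permutation-invariant set encoder: sum-pool a featurization of points.
def set_encode(points):
    return np.tanh(points).sum(axis=0)

# Relabeling the nodes permutes the rows but leaves the encoding unchanged.
assert np.allclose(set_encode(Q), set_encode(Q[[2, 0, 3, 1]]))
```

The key point of the sketch is that the set of points, unlike raw node features, still determines the edge structure, so a set encoder can in principle see the whole graph.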
no code implementations • 7 Feb 2024 • Zian Li, Xiyuan Wang, Shijia Kang, Muhan Zhang
Our results fill the gap in the theoretical power of invariant models, contributing to a rigorous and comprehensive understanding of their capabilities.
no code implementations • 4 Feb 2024 • Zhou Cai, Xiyuan Wang, Muhan Zhang
We first propose Latent Graph Diffusion (LGD), a generative model that can generate node, edge, and graph-level features of all categories simultaneously.
1 code implementation • 28 Nov 2023 • Xiyuan Wang, Muhan Zhang
We introduce PyTorch Geometric High Order (PyGHO), a library for High Order Graph Neural Networks (HOGNNs) that extends PyTorch Geometric (PyG).
1 code implementation • NeurIPS 2023 • Cai Zhou, Xiyuan Wang, Muhan Zhang
Second, on $1$-simplices, i.e., the edge level, we bridge edge-level random walks and Hodge $1$-Laplacians and design corresponding edge positional encodings (PEs).
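The Hodge $1$-Laplacian referred to here can be made concrete on the smallest interesting simplicial complex, a single filled triangle. The incidence matrices and orientations below follow the standard oriented-simplex convention, not necessarily the paper's notation:

```python
import numpy as np

# One filled triangle on nodes {0, 1, 2} with oriented edges
# e0 = (0,1), e1 = (0,2), e2 = (1,2).
B1 = np.array([[-1, -1,  0],    # node-to-edge incidence (tail -1, head +1)
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
B2 = np.array([[ 1],            # edge-to-triangle incidence
               [-1],
               [ 1]], dtype=float)

# Chain-complex property: the boundary of a boundary vanishes.
assert np.allclose(B1 @ B2, 0)

# Hodge 1-Laplacian acting on edge signals (1-simplices).
L1 = B1.T @ B1 + B2 @ B2.T

# The filled triangle has no 1-dimensional hole, so L1 has a trivial
# kernel: there are no harmonic edge flows.
assert np.linalg.matrix_rank(L1) == 3
```

Edge-level random walks and PEs operate on signals in the domain of this operator, analogously to how node-level walks relate to the graph Laplacian.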
1 code implementation • NeurIPS 2023 • Junru Zhou, Jiarui Feng, Xiyuan Wang, Muhan Zhang
Many of the proposed GNN models with provable cycle counting power are based on subgraph GNNs, i.e., extracting a bag of subgraphs from the input graph, generating a representation for each subgraph, and using these representations to augment the representation of the input graph.
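A toy version of the subgraph-GNN recipe, with the 1-hop ego-subgraph and a hand-written statistic standing in for a learned subgraph encoder, already exhibits cycle (triangle) counting power:

```python
import numpy as np

# Triangle 0-1-2 plus a pendant edge 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

def triangles_at(v, A):
    # "Bag of subgraphs" step: induce the 1-hop neighborhood of v,
    # then read a statistic off it. Here, each edge among the
    # neighbors of v closes exactly one triangle through v.
    nbrs = np.flatnonzero(A[v])
    sub = A[np.ix_(nbrs, nbrs)]
    return int(sub.sum()) // 2  # undirected edges among neighbors

counts = [triangles_at(v, A) for v in range(4)]
assert counts == [1, 1, 1, 0]   # node 3 lies on no triangle
assert sum(counts) // 3 == 1    # each triangle is counted at all 3 corners
```

In an actual subgraph GNN the hand-written statistic is replaced by a learned GNN over each extracted subgraph, and the per-node counts become per-node augmenting features.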
no code implementations • 24 May 2023 • Xiyuan Wang, Fangyuan Wang, Bo Xu, Liang Xu, Jing Xiao
Typically, Time-Delay Neural Networks (TDNNs) and Transformers serve as backbones for Speaker Verification (SV).
1 code implementation • 8 May 2023 • Cai Zhou, Xiyuan Wang, Muhan Zhang
Relational pooling is a framework for building more expressive and permutation-invariant graph neural networks.
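A minimal sketch of the relational pooling principle: average a deliberately permutation-sensitive readout `f` (a hypothetical stand-in for a powerful but non-invariant neural encoder) over all node relabelings, which yields a permutation-invariant function:

```python
import numpy as np
from itertools import permutations

def f(A):
    # A permutation-SENSITIVE readout: weights each adjacency entry
    # by its position, so relabeling the nodes changes the value.
    n = A.shape[0]
    W = np.arange(1, n * n + 1).reshape(n, n)
    return float((W * A).sum())

def relational_pool(A):
    # Average f over all n! node relabelings -> permutation invariant.
    n = A.shape[0]
    vals = [f(A[np.ix_(p, p)]) for p in permutations(range(n))]
    return sum(vals) / len(vals)

# Path graph 0-1-2 and a relabeled copy of it.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
p = (1, 0, 2)
A2 = A[np.ix_(p, p)]

assert f(A) != f(A2)                                   # raw readout varies
assert abs(relational_pool(A) - relational_pool(A2)) < 1e-9  # pooled: invariant
```

Real relational pooling methods avoid the factorial cost by sampling or restricting the permutation group; the full average above is only feasible for tiny graphs.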
no code implementations • 20 Apr 2023 • Xiyuan Wang, Pan Li, Muhan Zhang
When we want to learn a node-set representation involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN.
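The limitation of directly aggregating single-node representations can be seen on a 6-cycle, where a simple message-passing scheme (a stand-in for a 1-WL-bounded GNN) assigns every node the same embedding:

```python
import numpy as np

# 6-cycle: every node is structurally identical (vertex-transitive).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# Two rounds of simple message passing from constant features.
h = np.ones((n, 1))
for _ in range(2):
    h = np.tanh(A @ h + h)

# All node embeddings coincide on this graph...
assert np.allclose(h, h[0])

# ...so directly aggregating them cannot distinguish the adjacent
# pair {0, 1} from the non-adjacent pair {0, 3}, even though only
# the first pair forms an edge.
pooled_01 = h[0] + h[1]
pooled_03 = h[0] + h[3]
assert np.allclose(pooled_01, pooled_03)
assert A[0, 1] == 1 and A[0, 3] == 0
```

Node-set representation methods address exactly this failure by making the representation depend jointly on the target set of nodes, not only on their individually computed embeddings.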
1 code implementation • NeurIPS 2023 • Zian Li, Xiyuan Wang, Yinan Huang, Muhan Zhang
In this work, we first construct families of novel and symmetric geometric graphs that Vanilla DisGNN cannot distinguish even when considering all-pair distances, which greatly expands the existing counterexample families.
1 code implementation • 2 Feb 2023 • Xiyuan Wang, Haotong Yang, Muhan Zhang
In this work, we propose a novel link prediction model and further boost it by studying graph incompleteness.
Ranked #1 on Link Property Prediction on ogbl-ddi
1 code implementation • 1 Aug 2022 • Xiyuan Wang, Muhan Zhang
Projected onto a frame, equivariant features like 3D coordinates are converted to invariant features, so that we can capture geometric information with these projections and decouple the symmetry requirement from GNN design.
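A rough sketch of the frame idea, using a PCA frame built from the point cloud's principal axes (one simple frame choice, not necessarily the paper's): projecting the 3D coordinates onto the frame yields features that are unchanged by a global rotation, up to per-axis sign ambiguity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small 3D point cloud with three distinct principal axes.
X = np.array([[0., 0., 0.], [3., 0., 0.], [0., 2., 0.], [0., 0., 1.]])

def frame_project(X):
    # Center the cloud, build a frame from its principal axes,
    # and express the coordinates in that frame.
    Xc = X - X.mean(axis=0)
    _, V = np.linalg.eigh(Xc.T @ Xc)   # columns: frame axes
    return Xc @ V

# A random proper rotation (orthogonal matrix with det +1).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

P1, P2 = frame_project(X), frame_project(X @ Q.T)
# Up to the sign ambiguity of each principal axis, the projected
# coordinates are rotation-invariant.
assert np.allclose(np.abs(P1), np.abs(P2))
```

Because the projected coordinates are invariant, they can be fed to an ordinary (non-equivariant) network, which is the decoupling the abstract describes.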
1 code implementation • 20 Jun 2022 • Yang Hu, Xiyuan Wang, Zhouchen Lin, Pan Li, Muhan Zhang
As pointed out by previous works, this two-step procedure results in low discriminating power, as 1-WL-GNNs by nature learn node-level rather than link-level representations.
2 code implementations • 23 May 2022 • Xiyuan Wang, Muhan Zhang
We also establish a connection between the expressive power of spectral GNNs and Graph Isomorphism (GI) testing, the latter of which is often used to characterize spatial GNNs' expressive power.
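The spectral/spatial correspondence underlying this kind of analysis can be checked numerically: a polynomial filter applied as a matrix polynomial of the Laplacian (spatial form) equals the same polynomial applied to the Laplacian's eigenvalues (spectral form). The filter coefficients below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric normalized Laplacian of a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(4) - d_inv_sqrt @ A @ d_inv_sqrt

coefs = [0.5, -0.3, 0.1]   # filter g(x) = 0.5 - 0.3x + 0.1x^2
X = rng.normal(size=(4, 2))  # node features

# Spatial form: polynomial in L applied to the features directly.
H_spatial = sum(c * np.linalg.matrix_power(L, k) @ X
                for k, c in enumerate(coefs))

# Spectral form: filter the eigenvalues, then transform back.
lam, U = np.linalg.eigh(L)
g = sum(c * lam ** k for k, c in enumerate(coefs))
H_spectral = U @ np.diag(g) @ U.T @ X

assert np.allclose(H_spatial, H_spectral)
```

The spatial form never needs an eigendecomposition, which is why polynomial spectral GNNs scale to large graphs while retaining a clean spectral interpretation.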
no code implementations • ICLR 2022 • Xiyuan Wang, Muhan Zhang
Moreover, training a GLASS model takes, on average, only 28% of the time needed for SubGNN.
no code implementations • 22 May 2021 • Zhenyu Zhang, Yuanyuan Dong, Keping Long, Xiyuan Wang, Xiaoming Dai
Decentralized baseband processing (DBP) architecture, which partitions the base station antennas into multiple antenna clusters, has been recently proposed to alleviate the excessively high interconnect bandwidth, chip input/output data rates, and detection complexity for massive multi-user multiple-input multiple-output (MU-MIMO) systems.