1 code implementation • 26 Jun 2023 • Gaotang Li, Marlena Duda, Xiang Zhang, Danai Koutra, Yujun Yan
Based on these insights, we propose a new model, Interpretable Graph Sparsification (IGS), which enhances graph classification performance by up to 5.1% with 55.0% fewer edges.
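The sparsification idea can be illustrated with a short, hypothetical sketch (this is the generic keep-the-top-scoring-edges recipe, not the IGS algorithm itself; `edge_scores` stands in for whatever interpretability-derived importance the model assigns to each edge):

```python
import torch

def sparsify_by_edge_scores(edge_index: torch.Tensor,
                            edge_scores: torch.Tensor,
                            keep_ratio: float = 0.45) -> torch.Tensor:
    """Keep only the highest-scoring edges of a graph.

    edge_index: [2, E] COO edge list; edge_scores: [E] per-edge
    importance scores (e.g., from an attribution method). Returns
    the pruned edge list, to be fed back into the classifier.
    """
    num_kept = max(1, int(keep_ratio * edge_index.size(1)))
    top = torch.topk(edge_scores, num_kept).indices
    return edge_index[:, top]
```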
no code implementations • 24 May 2023 • Gaotang Li, Danai Koutra, Yujun Yan
Our empirical results reveal that our proposed size-insensitive attention strategy substantially enhances graph classification performance on large test graphs, which are 2-10 times larger than the training graphs, improving F1 scores by up to 8%.
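As a rough sketch of what a size-insensitive readout could look like (an illustrative assumption, not the paper's exact strategy): softmax attention weights always sum to 1, so the attention distribution sharpens or flattens with node count, whereas per-node sigmoid gates followed by a mean keep the pooled embedding's scale independent of graph size.

```python
import torch
import torch.nn as nn

class MeanGatedPool(nn.Module):
    """Hypothetical size-insensitive readout: sigmoid gates + mean."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: [N, dim] node embeddings of one graph
        g = torch.sigmoid(self.gate(h))  # [N, 1] per-node gates in (0, 1)
        return (g * h).mean(dim=0)       # [dim]; scale independent of N

pool = MeanGatedPool(dim=64)
h = torch.randn(500, 64)  # a test graph far larger than training graphs
z = pool(h)               # [64] graph embedding
```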
1 code implementation • 30 Nov 2021 • Yefan Zhou, Yiru Shen, Yujun Yan, Chen Feng, Yaoqing Yang
Our findings show that a leading factor in determining recognition versus reconstruction is how dispersed the training data is.
no code implementations • 5 Nov 2021 • Puja Trivedi, Ekdeep Singh Lubana, Yujun Yan, Yaoqing Yang, Danai Koutra
Unsupervised graph representation learning is critical to a wide range of applications where labels may be scarce or expensive to procure.
1 code implementation • 12 Feb 2021 • Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, Danai Koutra
We are the first to take a unified perspective to jointly explain the oversmoothing and heterophily problems at the node level.
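Oversmoothing, one of the two problems mentioned, is easy to reproduce: repeated neighborhood averaging drives node representations toward collinearity. A toy demonstration (the graph and feature sizes are arbitrary choices for illustration):

```python
import torch
import torch.nn.functional as F

def propagate(adj_norm: torch.Tensor, x: torch.Tensor, steps: int) -> torch.Tensor:
    """Apply x <- A_hat x repeatedly (GCN-style feature smoothing)."""
    for _ in range(steps):
        x = adj_norm @ x
    return x

def mean_pairwise_cosine(x: torch.Tensor) -> float:
    """Average cosine similarity between all pairs of node embeddings."""
    xn = F.normalize(x, dim=1)
    sim = xn @ xn.T
    n = x.size(0)
    return ((sim.sum() - n) / (n * (n - 1))).item()

# 4-node path graph with self-loops.
A = torch.tensor([[1., 1., 0., 0.],
                  [1., 1., 1., 0.],
                  [0., 1., 1., 1.],
                  [0., 0., 1., 1.]])
deg = A.sum(dim=1)
A_hat = A / torch.sqrt(deg[:, None] * deg[None, :])  # D^{-1/2} A D^{-1/2}

x = torch.randn(4, 8)
for steps in (1, 10, 100):
    sim = mean_pairwise_cosine(propagate(A_hat, x, steps))
    print(f"steps={steps:3d}  mean pairwise cosine={sim:.4f}")  # -> 1.0
```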
1 code implementation • 26 Aug 2020 • Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph E. Gonzalez, Michael W. Mahoney
Replacing BN with the recently-proposed Group Normalization (GN) can reduce gradient diversity and improve test accuracy.
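A minimal sketch of the swap in PyTorch (the recursive helper and the default group count are our assumptions, not code from the paper):

```python
import math
import torch.nn as nn

def replace_bn_with_gn(model: nn.Module, max_groups: int = 32) -> nn.Module:
    """Recursively swap every BatchNorm2d for a GroupNorm layer.

    GroupNorm computes statistics within each sample, so it sidesteps
    the cross-client batch-statistics mismatch that BatchNorm suffers
    from on non-IID federated data.
    """
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # num_groups must divide the channel count; gcd guarantees that.
            groups = math.gcd(max_groups, child.num_features)
            setattr(model, name, nn.GroupNorm(groups, child.num_features))
        else:
            replace_bn_with_gn(child, max_groups)
    return model
```

For example, `replace_bn_with_gn(torchvision.models.resnet18())` converts every BatchNorm layer in a standard ResNet-18 in place.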
4 code implementations • NeurIPS 2020 • Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, Danai Koutra
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features.
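The homophily level described here is commonly quantified by the edge homophily ratio: the fraction of edges whose endpoints share a class label (values near 1 indicate homophily; values near 0, strong heterophily). A minimal implementation:

```python
import torch

def edge_homophily(edge_index: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of edges connecting nodes with the same class label.

    edge_index: [2, E] COO edge list; labels: [N] node class labels.
    """
    src, dst = edge_index
    return (labels[src] == labels[dst]).float().mean().item()

# Example: only 1 of 3 edges links same-label nodes.
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
labels = torch.tensor([0, 0, 1, 0])
print(edge_homophily(edge_index, labels))  # 0.333...
```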
1 code implementation • NeurIPS 2020 • Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, Milad Hashemi
A significant effort has been made to train neural networks that replicate algorithmic reasoning, but they often fail to learn the abstract concepts underlying these algorithms.
no code implementations • ICLR 2020 • Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, Milad Hashemi
Turing complete computation and reasoning are often regarded as necessary precursors to general intelligence.