no code implementations • 2 May 2024 • Yufei Jin, Xingquan Zhu
Oversmoothing is a commonly observed challenge in graph neural network (GNN) learning: as layers are stacked, the embedding features learned by GNNs quickly become similar and indistinguishable, making them incapable of differentiating network proximity.
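The collapse the abstract describes can be reproduced in a few lines of linear algebra. The sketch below (a generic illustration, not the paper's method) applies repeated row-normalized neighbor averaging and watches the spread of node features shrink toward zero:

```python
import numpy as np

# Toy oversmoothing demo: stacking pure neighbor-averaging layers drives
# all node features toward a common value, losing discriminative power.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                           # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)    # row-normalized propagation

X = np.array([[1.0], [0.0], [0.5], [2.0]])      # initial node features
spread_before = X.std()
for _ in range(50):                             # 50 "layers" of aggregation
    X = P @ X
spread_after = X.std()
print(spread_before, spread_after)              # feature spread collapses
```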
no code implementations • 26 Feb 2024 • Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan
Graph learning plays a pivotal role and has gained significant attention in various application scenarios, from social network analysis to recommendation systems, for its effectiveness in modeling complex relations represented as graph-structured data.
1 code implementation • 4 Nov 2023 • Zhiqiang Wang, Yiran Pang, Cihan Ulus, Xingquan Zhu
In this paper, we propose a deep learning based crowd counting approach to automatically count the number of manatees within a region, using low-quality images as input.
1 code implementation • NeurIPS 2023 • Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan
Specifically, SFGC contains two collaborative components: (1) a training trajectory meta-matching scheme for effectively synthesizing small-scale graph-free data; (2) a graph neural feature score metric for dynamically evaluating the quality of the condensed data.
no code implementations • 19 Nov 2022 • Zhabiz Gharibshah, Xingquan Zhu
Contrastive self-supervised learning has been successfully used in many domains, such as images, texts, graphs, etc., to learn features without requiring label information.
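As context for how contrastive objectives learn without labels, here is a minimal InfoNCE-style loss in NumPy (a generic sketch; the function name, temperature, and setup are illustrative assumptions, not this paper's formulation). Two augmented "views" of the same sample form a positive pair, and the other samples in the batch act as negatives:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # cosine-normalize both views, then contrast each positive pair
    # against all in-batch negatives
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # pairwise similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # diagonal = positives

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))
random_views = info_nce(z, rng.normal(size=(8, 16)))
print(aligned, random_views)   # aligned views yield a lower loss
```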
no code implementations • 25 Jul 2022 • Pengfei Ma, Youxi Wu, Yan Li, Lei Guo, He Jiang, Xingquan Zhu, Xindong Wu
To screen out redundant feature vectors, we introduce a hashing screening mechanism for multi-grained scanning and propose a model called HW-Forest which adopts two strategies, hashing screening and window screening.
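A toy version of hashing screening might look like the following (a simplified sketch under assumptions; `hash_screen` and its rounding precision are illustrative, not HW-Forest's exact mechanism). Each scanned window vector is quantized and hashed, and only the first vector in each hash bucket is kept:

```python
# Quantize each feature vector, hash it, and keep only one representative
# per bucket, discarding redundant near-duplicates from scanning.
def hash_screen(vectors, precision=1):
    seen, kept = set(), []
    for v in vectors:
        key = hash(tuple(round(x, precision) for x in v))
        if key not in seen:
            seen.add(key)
            kept.append(v)
    return kept

windows = [[0.10, 0.21], [0.11, 0.19], [0.90, 0.06], [0.10, 0.20]]
screened = hash_screen(windows)
print(len(windows), "->", len(screened))   # near-duplicates are dropped
```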
no code implementations • 9 Jan 2022 • Youxi Wu, Qian Hu, Yan Li, Lei Guo, Xingquan Zhu, Xindong Wu
To discover patterns, existing methods often convert time series data into another form, such as a nominal/symbolic format, to reduce dimensionality, which inevitably distorts the data values.
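The nominal/symbolic conversion the abstract refers to can be illustrated with a SAX-style discretization (a generic sketch, not this paper's method; the breakpoints and alphabet are assumed for illustration). Note how the exact values are lost once each point becomes a symbol:

```python
import statistics

def symbolize(series, breakpoints=(-0.5, 0.5), alphabet="abc"):
    # z-normalize, then map each value to the symbol of its bin
    mu = statistics.mean(series)
    sigma = statistics.pstdev(series) or 1.0
    z = [(x - mu) / sigma for x in series]
    out = []
    for v in z:
        idx = sum(v > b for b in breakpoints)   # bin index of this value
        out.append(alphabet[idx])
    return "".join(out)

print(symbolize([1.0, 2.0, 9.0, 2.0, 1.0]))   # -> "abcba"
```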
no code implementations • 12 Nov 2021 • Yu Huang, Chao Zhang, Jaswanth Yella, Sergei Petrov, Xiaoye Qian, Yufei Tang, Xingquan Zhu, Sthitie Bom
In the era of big data, data-driven classification has become an essential method in smart manufacturing to guide production and optimize inspection.
no code implementations • 12 Aug 2021 • Yu Huang, James Li, Min Shi, Hanqi Zhuang, Xingquan Zhu, Laurent Chérubin, James VanZwieten, Yufei Tang
A spatio-temporal physics-coupled neural network (ST-PCNN) model is proposed to achieve three goals: (1) learning the underlying physics parameters, (2) transition of local information between spatio-temporal regions, and (3) forecasting future values for the dynamical system.
no code implementations • 11 Aug 2021 • Yu Huang, Yufei Tang, Xingquan Zhu, Min Shi, Ali Muhamed Ali, Hanqi Zhuang, Laurent Cherubin
To tackle these challenges, we advocate a spatio-temporal physics-coupled neural network (ST-PCNN) model to learn the underlying physics of the dynamical system and further couple the learned physics to assist the learning of the recurring dynamics.
no code implementations • 16 Jun 2021 • Shuwen Wang, Xingquan Zhu
Hospital readmission prediction learns models from historical medical data to predict the probability of a patient returning to the hospital within a certain period, such as 30 or 90 days, after discharge.
no code implementations • 8 Mar 2021 • Man Wu, Shirui Pan, Lan Du, Xingquan Zhu
By generating multiple graphs at different distance levels based on the adjacency matrix, we develop a long-short distance attention model over these graphs.
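One simple way to realize "multiple graphs at different distance levels" (a reading assumed for illustration, not the paper's exact construction) is to take powers of the adjacency matrix and keep, at each level k, the node pairs first reachable in exactly k hops:

```python
import numpy as np

def distance_graphs(A, max_k=2):
    # graphs[k-1] connects nodes whose shortest path is exactly k hops
    n = A.shape[0]
    reach = np.eye(n, dtype=bool)        # pairs already reached (< k hops)
    power = np.eye(n)
    graphs = []
    for _ in range(max_k):
        power = power @ A
        new = (power > 0) & ~reach       # first reached at this hop count
        graphs.append(new.astype(int))
        reach |= new
    return graphs

# path graph 0-1-2-3
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
g1, g2 = distance_graphs(A, max_k=2)
print(g1)   # 1-hop edges of the path
print(g2)   # 2-hop pairs: (0,2) and (1,3)
```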
no code implementations • 7 Jan 2021 • Zhabiz Gharibshah, Xingquan Zhu
What types of data are available for user response prediction?
no code implementations • 14 Oct 2020 • Zhabiz Gharibshah, Xingquan Zhu
We argue that tripartite networks are common in real-world applications, and that the essential challenge of representation learning lies in the heterogeneous relations between the various node types and links in the network.
1 code implementation • 21 Sep 2020 • Min Shi, David A. Wilson, Xingquan Zhu, Yu Huang, Yuan Zhuang, Jianxun Liu, Yufei Tang
In particular, Neural Architecture Search (NAS) has received significant attention throughout the AutoML research community and has pushed forward the state of the art in a number of neural models addressing grid-like data such as texts and images.
no code implementations • 26 Dec 2019 • Min Shi, Yufei Tang, Xingquan Zhu, Jianxun Liu
By using a spectral-based graph convolution aggregation process, each node is allowed to concentrate on the most determining neighborhood features aligned with the corresponding learning task.
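For context, one symmetrically normalized graph-convolution aggregation step (the standard first-order spectral GCN rule, shown as background; the weights below are random placeholders, not learned parameters) computes H' = D^{-1/2}(A+I)D^{-1/2} H W:

```python
import numpy as np

def gcn_layer(A, H, W):
    # symmetrically normalized aggregation with self-loops, then ReLU
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3)                                  # one-hot input features
W = np.random.default_rng(1).normal(size=(3, 4))
out = gcn_layer(A, H, W)
print(out.shape)                               # (3, 4): one row per node
```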
no code implementations • 26 Dec 2019 • Min Shi, Yufei Tang, Xingquan Zhu, Jianxun Liu
Nodes in multi-label networks not only carry multiple labels each; such labels are often highly correlated, making existing methods ineffective or causing them to fail at handling such correlation for node representation learning.
Ranked #30 on Multi-Label Classification on MS-COCO
no code implementations • 21 Oct 2019 • Jorge Agnese, Jonathan Herrera, Haicheng Tao, Xingquan Zhu
Text-to-image synthesis refers to computational methods which translate human written textual descriptions, in the form of keywords or sentences, into images with similar semantic meaning to the text.
1 code implementation • 14 Jan 2019 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
In this paper, we propose a unified framework for attributed network embedding, attri2vec, that learns node embeddings by discovering a latent node attribute subspace via a network-structure-guided transformation performed on the original attribute space.
Ranked #1 on Node Clustering on Facebook
1 code implementation • 14 Jan 2019 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
In this paper, we propose a search-efficient binary network embedding algorithm called BinaryNE to learn a binary code for each node, by simultaneously modeling node context relations and node attribute relations through a three-layer neural network.
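The "search efficient" motivation can be illustrated as follows (a generic sketch of why binary codes help, using hypothetical 4-bit codes; this is not BinaryNE's training procedure). Sign-binarized embeddings let nearest-neighbor search use XOR plus a popcount instead of floating-point similarity:

```python
import numpy as np

def hamming_nearest(query_code, codes):
    # XOR then bit-count gives the Hamming distance to every stored code
    dists = (codes ^ query_code).sum(axis=1)
    return int(np.argmin(dists))

# hypothetical node codes, e.g. obtained as (embedding > 0)
codes = np.array([[0, 0, 0, 0],
                  [1, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 1]], dtype=np.uint8)
query = np.array([1, 1, 1, 0], dtype=np.uint8)
print(hamming_nearest(query, codes))   # -> 1: differs in a single bit
```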
2 code implementations • 16 Oct 2018 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
In this paper, we propose a Scalable Incomplete Network Embedding (SINE) algorithm for learning node representations from incomplete graphs.
Social and Information Networks
no code implementations • 7 Mar 2018 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
Network embedding in heterogeneous information networks (HINs) is a challenging task, due to complications of different node types and rich relationships between nodes.
Social and Information Networks
no code implementations • 4 Dec 2017 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
Network representation learning has recently been proposed as a new learning paradigm that embeds network vertices into a low-dimensional vector space by preserving network topology, vertex content, and other side information.
no code implementations • CIKM '17 Proceedings of the 2017 ACM on Conference on Information and Knowledge Management 2017 • Chun Wang, Shirui Pan, Guodong Long, Xingquan Zhu, Jing Jiang
In this paper, we propose a novel marginalized graph autoencoder (MGAE) algorithm for graph clustering.
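The denoising idea behind a marginalized graph autoencoder can be sketched as follows (an illustrative assumption using explicit corruption and a least-squares reconstruction, not MGAE's marginalized closed-form solution): features are smoothed over the graph, a few entries are corrupted, and a linear map is fit to recover the clean smoothed features:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_norm = (A + np.eye(3)) / 3.0           # simple normalized graph filter
X = rng.normal(size=(3, 4))
X_smooth = A_norm @ X                    # graph-convolved node features

X_corrupt = X_smooth.copy()              # corrupt a few entries to zero
X_corrupt[0, 1] = 0.0
X_corrupt[2, 3] = 0.0

# fit a linear map that reconstructs clean features from corrupted ones
W, *_ = np.linalg.lstsq(X_corrupt, X_smooth, rcond=None)
recon_err = np.linalg.norm(X_corrupt @ W - X_smooth)
print(W.shape, recon_err)
```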
no code implementations • 11 Mar 2014 • Meng Fang, Jie Yin, Xingquan Zhu
In this paper, we propose a new transfer learning algorithm that attempts to transfer common latent structure features across the source and target networks.