Search Results for author: Xingquan Zhu

Found 24 papers, 6 papers with code

Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning

no code implementations · 26 Feb 2024 · Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan

Graph learning plays a pivotal role, and has gained significant attention, in a variety of application scenarios, from social network analysis to recommendation systems, owing to its effectiveness in modeling complex relations represented as graph-structured data.

Continual Learning · Domain Adaptation +2

Counting Manatee Aggregations using Deep Neural Networks and Anisotropic Gaussian Kernel

1 code implementation · 4 Nov 2023 · Zhiqiang Wang, Yiran Pang, Cihan Ulus, Xingquan Zhu

In this paper, we propose a deep learning-based crowd counting approach to automatically count the number of manatees within a region, using low-quality images as input.

Crowd Counting · Scene Recognition +1
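
As background for the counting approach above: density-map regression typically places a Gaussian blob at each annotated animal and lets a CNN regress the resulting map, whose integral is the count. The sketch below is a minimal, hypothetical illustration of building such a density map with an anisotropic Gaussian kernel (elongated and rotated to better cover a manatee-shaped target); the function names, kernel parameters, and annotation format are assumptions, not the paper's implementation.

```python
import numpy as np

def anisotropic_gaussian(shape, center, sigma_x, sigma_y, theta=0.0):
    """Return a 2-D anisotropic Gaussian blob on a grid of the given shape.

    sigma_x / sigma_y stretch the kernel along its principal axes and theta
    rotates those axes, so elongated objects (e.g. manatee bodies) are covered
    better than with an isotropic kernel.
    """
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    xc, yc = center
    # Rotate coordinates into the kernel's principal-axis frame.
    xr = (x - xc) * np.cos(theta) + (y - yc) * np.sin(theta)
    yr = -(x - xc) * np.sin(theta) + (y - yc) * np.cos(theta)
    g = np.exp(-(xr ** 2 / (2 * sigma_x ** 2) + yr ** 2 / (2 * sigma_y ** 2)))
    return g / g.sum()  # each blob integrates to 1, i.e. one animal

# Hypothetical annotations: (x, y, orientation) for three animals.
annotations = [(40, 30, 0.3), (120, 80, 1.2), (200, 60, -0.5)]
density = np.zeros((160, 240))
for x, y, theta in annotations:
    density += anisotropic_gaussian(density.shape, (x, y), sigma_x=18, sigma_y=6, theta=theta)

print(f"estimated count = {density.sum():.2f}")  # ~3.0; a CNN would regress this map
```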

Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data

1 code implementation · NeurIPS 2023 · Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan

Specifically, SFGC contains two collaborative components: (1) a training trajectory meta-matching scheme for effectively synthesizing small-scale graph-free data; (2) a graph neural feature score metric for dynamically evaluating the quality of the condensed data.

Graph Learning
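
For intuition on the first component: trajectory meta-matching generally scores how closely the parameters of a network trained on the condensed data track an expert trajectory recorded on the original graph. A minimal sketch of one common normalized parameter-distance objective is given below; it illustrates the general idea of trajectory matching and is not necessarily SFGC's exact loss.

```python
import numpy as np

def trajectory_matching_loss(student_params, expert_start, expert_end):
    """Distance between where the student ends up and where the expert ends up,
    normalized by how far the expert moved from its starting checkpoint.

    All arguments are flat parameter vectors. This mirrors the generic
    trajectory-matching objective used in dataset condensation; the exact
    formulation in SFGC may differ.
    """
    num = np.sum((student_params - expert_end) ** 2)
    den = np.sum((expert_start - expert_end) ** 2) + 1e-12
    return num / den

# Toy usage with random checkpoints.
rng = np.random.default_rng(0)
p0, p1 = rng.normal(size=1000), rng.normal(size=1000)
student = p0 + 0.9 * (p1 - p0)                      # student almost reaches the expert endpoint
print(trajectory_matching_loss(student, p0, p1))    # small value -> good match
```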

Local Contrastive Feature learning for Tabular Data

no code implementations · 19 Nov 2022 · Zhabiz Gharibshah, Xingquan Zhu

Contrastive self-supervised learning has been successfully used in many domains, such as images, texts, and graphs, to learn features without requiring label information.

Self-Supervised Learning

Deep Forest with Hashing Screening and Window Screening

no code implementations · 25 Jul 2022 · Pengfei Ma, Youxi Wu, Yan Li, Lei Guo, He Jiang, Xingquan Zhu, Xindong Wu

To screen out redundant feature vectors, we introduce a hashing screening mechanism for multi-grained scanning and propose a model called HW-Forest, which adopts two strategies: hashing screening and window screening.

OPP-Miner: Order-preserving sequential pattern mining

no code implementations · 9 Jan 2022 · Youxi Wu, Qian Hu, Yan Li, Lei Guo, Xingquan Zhu, Xindong Wu

To discover patterns, existing methods often convert time series data into another form, such as a nominal/symbolic format, to reduce dimensionality, which inevitably distorts the original data values.

Sequential Pattern Mining · Time Series +1
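
Order-preserving patterns avoid that distortion by keeping only the relative order (ranks) of values inside a window. The toy sketch below extracts and counts such rank patterns with a sliding window; it illustrates the notion of an order-preserving pattern, not the OPP-Miner algorithm or its pruning strategies.

```python
from collections import Counter

def rank_pattern(window):
    """Relative-order pattern of a window, e.g. [3.1, 5.0, 2.2] -> (2, 3, 1)."""
    order = sorted(range(len(window)), key=lambda i: window[i])
    ranks = [0] * len(window)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return tuple(ranks)

def count_patterns(series, length):
    """Count every length-`length` order-preserving pattern in the series."""
    return Counter(
        rank_pattern(series[i:i + length]) for i in range(len(series) - length + 1)
    )

ts = [1.0, 2.5, 1.7, 3.2, 2.9, 4.1, 3.8]
print(count_patterns(ts, 3).most_common(2))
```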

GraSSNet: Graph Soft Sensing Neural Networks

no code implementations · 12 Nov 2021 · Yu Huang, Chao Zhang, Jaswanth Yella, Sergei Petrov, Xiaoye Qian, Yufei Tang, Xingquan Zhu, Sthitie Bom

In the era of big data, data-driven classification has become an essential method in smart manufacturing to guide production and optimize inspection.

Time Series · Time Series Analysis +1

ST-PCNN: Spatio-Temporal Physics-Coupled Neural Networks for Dynamics Forecasting

no code implementations · 12 Aug 2021 · Yu Huang, James Li, Min Shi, Hanqi Zhuang, Xingquan Zhu, Laurent Chérubin, James VanZwieten, Yufei Tang

A spatio-temporal physics-coupled neural network (ST-PCNN) model is proposed to achieve three goals: (1) learning the underlying physics parameters, (2) transition of local information between spatio-temporal regions, and (3) forecasting future values for the dynamical system.

Physics-Coupled Spatio-Temporal Active Learning for Dynamical Systems

no code implementations · 11 Aug 2021 · Yu Huang, Yufei Tang, Xingquan Zhu, Min Shi, Ali Muhamed Ali, Hanqi Zhuang, Laurent Cherubin

To tackle these challenges, we advocate a spatio-temporal physics-coupled neural network (ST-PCNN) model to learn the underlying physics of the dynamical system and further couple the learned physics to assist the learning of the recurring dynamics.

Active Learning · Spatio-Temporal Forecasting

Predictive Modeling of Hospital Readmission: Challenges and Solutions

no code implementations · 16 Jun 2021 · Shuwen Wang, Xingquan Zhu

Hospital readmission prediction is the task of learning models from historical medical data to predict the probability of a patient returning to the hospital within a certain period, such as 30 or 90 days, after discharge.

Decision Making · Readmission Prediction
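
Framed this way, readmission prediction is a binary classification problem over discharge records. The sketch below is a hypothetical, minimal baseline on synthetic data; the feature names, label construction, and logistic-regression choice are illustrative assumptions, not the paper's models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical features per discharge: [age, length_of_stay, prior_admissions, num_diagnoses]
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
# Synthetic label: 1 if the patient was readmitted within the chosen window (e.g. 30 days).
y = (X @ np.array([0.3, 0.8, 1.2, 0.5]) + rng.normal(scale=0.5, size=500) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted probability of readmission for each held-out discharge.
proba = model.predict_proba(X_test)[:, 1]
print("mean predicted readmission risk:", proba.mean().round(3))
```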

Learning Graph Neural Networks with Positive and Unlabeled Nodes

no code implementations · 8 Mar 2021 · Man Wu, Shirui Pan, Lan Du, Xingquan Zhu

By generating multiple graphs at different distance levels based on the adjacency matrix, we develop a long-short distance attention model over these graphs.

Node Classification · Transductive Learning
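
For the "multiple graphs at different distance levels" idea: such views can be derived from powers of the adjacency matrix, and an attention mechanism can then weight how much each distance level contributes to a node's representation. The numpy sketch below illustrates that construction with fixed toy attention scores; it is an assumption-laden simplification, not the paper's long-short distance attention model.

```python
import numpy as np

def khop_graphs(adj, max_hops):
    """Binary reachability graphs A_1, ..., A_k, where A_k links nodes
    that are reachable within k hops (self-loops removed)."""
    graphs, reach = [], np.eye(len(adj))
    for _ in range(max_hops):
        reach = reach @ (adj + np.eye(len(adj)))
        a_k = (reach > 0).astype(float)
        np.fill_diagonal(a_k, 0.0)
        graphs.append(a_k)
    return graphs

def attend_over_views(views, features, scores):
    """Aggregate node features over each distance-level graph, then mix the
    views with softmax attention weights (here: fixed toy scores)."""
    weights = np.exp(scores) / np.exp(scores).sum()
    aggregated = [a @ features / np.maximum(a.sum(1, keepdims=True), 1) for a in views]
    return sum(w * h for w, h in zip(weights, aggregated))

adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # toy node features
views = khop_graphs(adj, max_hops=3)
print(attend_over_views(views, X, scores=np.array([1.0, 0.5, 0.1])).round(2))
```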

TriNE: Network Representation Learning for Tripartite Heterogeneous Networks

no code implementations · 14 Oct 2020 · Zhabiz Gharibshah, Xingquan Zhu

We argue that tripartite networks are common in real-world applications, and that the essential challenge of representation learning lies in the heterogeneous relations between the various node types and links in the network.

Network Embedding · TAG

Evolutionary Architecture Search for Graph Neural Networks

1 code implementation · 21 Sep 2020 · Min Shi, David A. Wilson, Xingquan Zhu, Yu Huang, Yuan Zhuang, Jianxun Liu, Yufei Tang

In particular, Neural Architecture Search (NAS) has received significant attention throughout the AutoML research community and has pushed forward the state of the art in a number of neural models that address grid-like data such as texts and images.

Neural Architecture Search · Representation Learning

Multi-Label Graph Convolutional Network Representation Learning

no code implementations · 26 Dec 2019 · Min Shi, Yufei Tang, Xingquan Zhu, Jianxun Liu

In multi-label networks, each node not only has multiple labels, but such labels are also often highly correlated, which makes existing methods either ineffective or unable to handle such correlation for node representation learning.

Multi-Label Classification · Node Classification +1

Feature-Attention Graph Convolutional Networks for Noise Resilient Learning

no code implementations · 26 Dec 2019 · Min Shi, Yufei Tang, Xingquan Zhu, Jianxun Liu

By using a spectral-based graph convolution aggregation process, each node is allowed to concentrate more on the neighborhood features that are most decisive for the corresponding learning task.

Feature Importance
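
A rough way to picture this: weight each feature dimension before the usual symmetrically normalized aggregation, so noisy dimensions contribute less. The sketch below uses a fixed softmax vector in place of the learned attention and is only an illustration of the idea, not the paper's mechanism.

```python
import numpy as np

def feature_attention_aggregate(adj, X, feature_scores):
    """One propagation step: down-weight noisy feature dimensions via a
    softmax attention vector, then aggregate with D^{-1/2} (A + I) D^{-1/2}."""
    attn = np.exp(feature_scores) / np.exp(feature_scores).sum()   # per-feature weights
    a_hat = adj + np.eye(len(adj))                                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(1)))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return norm_adj @ (X * attn)                                   # attend, then aggregate

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.array([[1.0, 5.0], [2.0, -3.0], [0.5, 9.0]])                # second column is "noisy"
print(feature_attention_aggregate(adj, X, feature_scores=np.array([2.0, 0.0])).round(2))
```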

A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis

no code implementations · 21 Oct 2019 · Jorge Agnese, Jonathan Herrera, Haicheng Tao, Xingquan Zhu

Text-to-image synthesis refers to computational methods that translate human-written textual descriptions, in the form of keywords or sentences, into images with semantic meaning similar to that of the text.

Image Generation · Object Reconstruction

Attributed Network Embedding via Subspace Discovery

1 code implementation · 14 Jan 2019 · Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang

In this paper, we propose a unified framework for attributed network embedding, attri2vec, that learns node embeddings by discovering a latent node attribute subspace via a network-structure-guided transformation performed on the original attribute space.

Attribute · Clustering +4
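
The core operation is a mapping from the original attribute space into a latent subspace, trained so that mapped representations predict structural context (e.g. random-walk neighbors). The heavily simplified sketch below shows such a transformation and a skip-gram-style context score; the ReLU choice, parameter shapes, and training details are assumptions rather than the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, attr_dim, embed_dim = 5, 8, 3

X = rng.normal(size=(num_nodes, attr_dim))         # original node attributes
W = rng.normal(size=(attr_dim, embed_dim)) * 0.1   # subspace projection (learned in practice)
C = rng.normal(size=(num_nodes, embed_dim)) * 0.1  # context vectors (learned in practice)

def embed(X, W):
    """Map attributes into the latent subspace (structure-guided during training)."""
    return np.maximum(X @ W, 0.0)                  # ReLU transformation

def context_score(u, v, X, W, C):
    """Skip-gram-style probability that node v appears in node u's walk context."""
    z = embed(X, W)
    return 1.0 / (1.0 + np.exp(-z[u] @ C[v]))

print(context_score(0, 3, X, W, C))
```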

Search Efficient Binary Network Embedding

1 code implementation · 14 Jan 2019 · Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang

In this paper, we propose a search efficient binary network embedding algorithm called BinaryNE to learn a binary code for each node, by simultaneously modeling node context relations and node attribute relations through a three-layer neural network.

Attribute · Network Embedding +2
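
The payoff of binary codes is that similarity search reduces to Hamming distance on bits, which is far cheaper than dense dot products over real-valued embeddings. The toy sketch below binarizes stand-in embeddings with a sign threshold and searches by Hamming distance; it conveys the search-side intuition only, whereas BinaryNE learns the codes with its three-layer network.

```python
import numpy as np

def binarize(embeddings):
    """Turn real-valued node embeddings into {0, 1} codes via the sign of each dimension."""
    return (embeddings > 0).astype(np.uint8)

def hamming_search(codes, query_code, top_k=3):
    """Return indices of the top_k nodes whose binary codes are closest in Hamming distance."""
    distances = (codes != query_code).sum(axis=1)
    return np.argsort(distances)[:top_k]

rng = np.random.default_rng(1)
dense = rng.normal(size=(1000, 64))      # stand-in for learned node embeddings
codes = binarize(dense)
print(hamming_search(codes, codes[42]))  # node 42 should rank itself first
```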

SINE: Scalable Incomplete Network Embedding

2 code implementations · 16 Oct 2018 · Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang

In this paper, we propose a Scalable Incomplete Network Embedding (SINE) algorithm for learning node representations from incomplete graphs.

Social and Information Networks

MetaGraph2Vec: Complex Semantic Path Augmented Heterogeneous Network Embedding

no code implementations · 7 Mar 2018 · Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang

Network embedding in heterogeneous information networks (HINs) is a challenging task, due to the complications arising from different node types and the rich relationships between nodes.

Social and Information Networks

Network Representation Learning: A Survey

no code implementations · 4 Dec 2017 · Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang

Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information.

Representation Learning

Transfer Learning across Networks for Collective Classification

no code implementations · 11 Mar 2014 · Meng Fang, Jie Yin, Xingquan Zhu

In this paper, we propose a new transfer learning algorithm that attempts to transfer common latent structure features across the source and target networks.

Classification · General Classification +1
