Search Results for author: Quan Gan

Found 27 papers, 12 papers with code

Continuous Sign Language Recognition Based on Motor attention mechanism and frame-level Self-distillation

no code implementations29 Feb 2024 Qidan Zhu, Jing Li, Fei Yuan, Quan Gan

Changes in facial expression, head movement, body movement, and gesture are salient cues in sign language recognition, yet most current continuous sign language recognition (CSLR) methods focus on static images in video sequences at the frame-level feature-extraction stage, ignoring the dynamic changes in the images.

Sign Language Recognition

GFS: Graph-based Feature Synthesis for Prediction over Relational Databases

no code implementations4 Dec 2023 Han Zhang, Quan Gan, David Wipf, Weinan Zhang

Consequently, the prevalent approach for training machine learning models on data stored in relational databases is to perform feature engineering that merges the data from multiple tables into a single table and then to apply single-table models.

Feature Engineering Inductive Bias

GNNFlow: A Distributed Framework for Continuous Temporal GNN Learning on Dynamic Graphs

1 code implementation29 Nov 2023 Yuchen Zhong, Guangming Sheng, Tianzuo Qin, Minjie Wang, Quan Gan, Chuan Wu

We introduce GNNFlow, a distributed framework that enables efficient continuous temporal graph representation learning on dynamic graphs on multi-GPU machines.

Graph Learning Graph Representation Learning +1

From Hypergraph Energy Functions to Hypergraph Neural Networks

1 code implementation16 Jun 2023 Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, David Wipf

Hypergraphs are a powerful abstraction for representing higher-order interactions between entities of interest.

Bilevel Optimization Node Classification

Continuous sign language recognition based on cross-resolution knowledge distillation

no code implementations13 Mar 2023 Qidan Zhu, Jing Li, Fei Yuan, Quan Gan

It is then used to combine cross-resolution knowledge distillation and traditional knowledge distillation methods to form a CSLR model based on cross-resolution knowledge distillation (CRKD).

Knowledge Distillation Sign Language Recognition

FreshGNN: Reducing Memory Access via Stable Historical Embeddings for Graph Neural Network Training

no code implementations18 Jan 2023 Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang Song, Quan Gan, Zengfeng Huang, Jidong Zhai, Zheng Zhang

A key performance bottleneck when training graph neural network (GNN) models on large, real-world graphs is loading node features onto a GPU.

Refined Edge Usage of Graph Neural Networks for Edge Prediction

no code implementations25 Dec 2022 Jiarui Jin, Yangkun Wang, Weinan Zhang, Quan Gan, Xiang Song, Yong Yu, Zheng Zhang, David Wipf

However, existing methods lack elaborate design regarding the distinctions between the two tasks, which have frequently been overlooked: (i) edges only constitute the topology in the node classification task, but can serve as both topology and supervision (i.e., labels) in the edge prediction task; (ii) node classification makes a prediction over each individual node, whereas edge prediction is determined by each pair of nodes.

Link Prediction Node Classification

Temporal superimposed crossover module for effective continuous sign language

no code implementations7 Nov 2022 Qidan Zhu, Jing Li, Fei Yuan, Quan Gan

The ultimate goal of continuous sign language recognition (CSLR) is to facilitate communication between hearing-impaired and hearing people, which requires a certain degree of real-time performance and deployability of the model.

Image Classification Sign Language Recognition +1

Continuous Sign Language Recognition via Temporal Super-Resolution Network

no code implementations3 Jul 2022 Qidan Zhu, Jing Li, Fei Yuan, Quan Gan

The sparse frame-level features are fused with the features obtained by the two designed branches to form the reconstructed dense frame-level feature sequence, and connectionist temporal classification (CTC) loss is used for training and optimization after the temporal feature-extraction part (a generic sketch of CTC loss usage follows this entry).

Sign Language Recognition Super-Resolution +2
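The excerpt above mentions training with a connectionist temporal classification (CTC) loss over reconstructed dense frame-level features. The snippet below is only a minimal, generic illustration of CTC training in PyTorch, not the authors' code; the tensor shapes, vocabulary size, and sequence lengths are assumptions.

```python
# Minimal, generic CTC-loss usage (PyTorch); shapes and sizes are assumed for illustration.
import torch
import torch.nn as nn

T, N, C = 100, 4, 30  # frames per video, batch size, gloss vocabulary size (incl. blank=0)
# Stand-in for frame-level model outputs; a real model's features would require grad anyway.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, 12))                   # gloss label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # in a real model, gradients flow back to the frame-level feature extractor
```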

Descent Steps of a Relation-Aware Energy Produce Heterogeneous Graph Neural Networks

1 code implementation22 Jun 2022 Hongjoon Ahn, Yongyi Yang, Quan Gan, Taesup Moon, David Wipf

Moreover, the complexity of this trade-off is compounded in the heterogeneous graph case due to the disparate heterophily relationships between nodes of different types.

Bilevel Optimization Classification +2

CEP3: Community Event Prediction with Neural Point Process on Graph

no code implementations21 May 2022 Xuhong Wang, Sirui Chen, Yixuan He, Minjie Wang, Quan Gan, Yupu Yang, Junchi Yan

Many real-world applications can be formulated as event forecasting on Continuous Time Dynamic Graphs (CTDGs), where the occurrence of a timed event between two entities is represented as an edge with its timestamp in the graph. However, most previous works approach the problem in compromised settings, either formulating it as a link prediction task given the event time or as a time prediction problem given which event will happen next.

Link Prediction

Multi-scale temporal network for continuous sign language recognition

no code implementations8 Apr 2022 Qidan Zhu, Jing Li, Fei Yuan, Quan Gan

The time-wise feature-extraction part performs temporal feature learning by first extracting temporal receptive-field features of different scales with the proposed multi-scale temporal block (MST-block), improving the temporal modeling capability, and then further encoding the temporal features of different scales with a Transformer module to obtain more accurate temporal features.

Sign Language Recognition

Space4HGNN: A Novel, Modularized and Reproducible Platform to Evaluate Heterogeneous Graph Neural Network

1 code implementation18 Feb 2022 Tianyu Zhao, Cheng Yang, Yibo Li, Quan Gan, Zhenyi Wang, Fengqi Liang, Huan Zhao, Yingxia Shao, Xiao Wang, Chuan Shi

Heterogeneous Graph Neural Networks (HGNNs) have been successfully employed in various tasks, but the importance of different design dimensions of HGNNs remains unclear due to their diverse architectures and application scenarios.

GNNRank: Learning Global Rankings from Pairwise Comparisons via Directed Graph Neural Networks

1 code implementation1 Feb 2022 Yixuan He, Quan Gan, David Wipf, Gesine Reinert, Junchi Yan, Mihai Cucuringu

In this paper, we introduce neural networks into the ranking recovery problem by proposing the so-called GNNRank, a trainable GNN-based framework with digraph embedding.

Inductive Bias

Why Propagate Alone? Parallel Use of Labels and Features on Graphs

no code implementations ICLR 2022 Yangkun Wang, Jiarui Jin, Weinan Zhang, Yongyi Yang, Jiuhai Chen, Quan Gan, Yong Yu, Zheng Zhang, Zengfeng Huang, David Wipf

In this regard, it has recently been proposed to use a randomly selected portion of the training labels as GNN inputs, concatenated with the original node features, for making predictions on the remaining labels (a sketch of this label-reuse trick follows this entry).

Node Property Prediction Property Prediction
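The label-reuse idea in the excerpt above can be illustrated with a short sketch: reveal a random subset of training labels as an extra input channel and predict the rest. This is only a hedged illustration under assumed tensor shapes, not the paper's implementation; the helper name and the keep_ratio parameter are hypothetical.

```python
# Hedged sketch: concatenate a randomly revealed subset of training labels with node features.
import torch
import torch.nn.functional as F

def label_augmented_features(x, y, train_idx, num_classes, keep_ratio=0.5):
    """x: [N, D] node features, y: [N] integer labels, train_idx: nodes with known labels."""
    label_channel = torch.zeros(x.size(0), num_classes, device=x.device)
    perm = torch.randperm(train_idx.numel(), device=x.device)
    keep = train_idx[perm[: int(keep_ratio * train_idx.numel())]]  # labels revealed as inputs
    label_channel[keep] = F.one_hot(y[keep], num_classes).float()
    # The remaining training nodes (train_idx minus keep) stay as prediction targets.
    return torch.cat([x, label_channel], dim=1), keep
```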

Inductive Relation Prediction Using Analogy Subgraph Embeddings

no code implementations ICLR 2022 Jiarui Jin, Yangkun Wang, Kounianhua Du, Weinan Zhang, Zheng Zhang, David Wipf, Yong Yu, Quan Gan

Prevailing methods for relation prediction in heterogeneous graphs aim at learning latent representations (i.e., embeddings) of observed nodes and relations, and are thus limited to the transductive setting in which the relation types must be known during training.

Inductive Bias Inductive Relation Prediction +1

TraverseNet: Unifying Space and Time in Message Passing for Traffic Forecasting

1 code implementation25 Aug 2021 Zonghan Wu, Da Zheng, Shirui Pan, Quan Gan, Guodong Long, George Karypis

This paper aims to unify spatial dependency and temporal dependency in a non-Euclidean space while capturing the inner spatial-temporal dependencies for traffic data.

Attribute

Graph Neural Networks Inspired by Classical Iterative Algorithms

1 code implementation10 Mar 2021 Yongyi Yang, Tang Liu, Yangkun Wang, Jinjing Zhou, Quan Gan, Zhewei Wei, Zheng Zhang, Zengfeng Huang, David Wipf

Despite the recent success of graph neural networks (GNNs), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can occur as a result of graph heterophily or adversarial attacks.

Node Classification

A Biased Graph Neural Network Sampler with Near-Optimal Regret

1 code implementation NeurIPS 2021 Qingru Zhang, David Wipf, Quan Gan, Le Song

Graph neural networks (GNN) have recently emerged as a vehicle for applying deep network architectures to graph and relational data.

DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs

1 code implementation11 Oct 2020 Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, George Karypis

To minimize the overheads associated with distributed computations, DistDGL uses a high-quality, lightweight min-cut graph partitioning algorithm along with multiple balancing constraints (a hedged partitioning sketch follows this entry).

Fraud Detection graph partitioning
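As a hedged illustration of the preprocessing step described above, the call below uses DGL's distributed partitioning API roughly as documented in recent releases; the graph name, number of parts, output path, and balancing flag are assumptions, and exact keyword arguments may vary by DGL version.

```python
# Hedged sketch: METIS-based min-cut partitioning with a balancing constraint (DGL).
import dgl
import torch

# Toy homogeneous graph standing in for a billion-scale one.
g = dgl.rand_graph(1000, 5000)
g.ndata['feat'] = torch.randn(g.num_nodes(), 16)

dgl.distributed.partition_graph(
    g,
    graph_name='toy_graph',     # assumed name
    num_parts=4,                # e.g., one partition per machine
    out_path='toy_partitions',  # assumed output directory
    part_method='metis',        # min-cut partitioning
    balance_edges=True,         # one of DistDGL's balancing constraints
)
```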

A Multimodal Deep Regression Bayesian Network for Affective Video Content Analyses

no code implementations ICCV 2017 Quan Gan, Shangfei Wang, Longfei Hao, Qiang Ji

After that, a joint representation is extracted from the top layers of the two deep networks, and thus captures the high order dependencies between visual modality and audio modality.

regression

First Step toward Model-Free, Anonymous Object Tracking with Recurrent Neural Networks

no code implementations19 Nov 2015 Quan Gan, Qipeng Guo, Zheng Zhang, Kyunghyun Cho

In this paper, we propose and study a novel visual object tracking approach based on convolutional networks and recurrent networks.

Object Visual Object Tracking +1
