Search Results for author: Yongduo Sui

Found 12 papers, 7 papers with code

Dynamic Sparse Learning: A Novel Paradigm for Efficient Recommendation

no code implementations • 5 Feb 2024 • Shuyao Wang, Yongduo Sui, Jiancan Wu, Zhi Zheng, Hui Xiong

In the realm of deep learning-based recommendation systems, the increasing computational demands, driven by the growing number of users and items, pose a significant challenge to practical deployment.

Model Compression · Recommendation Systems · +1

Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness

no code implementations • 2 Feb 2024 • Guibin Zhang, Yanwei Yue, Kun Wang, Junfeng Fang, Yongduo Sui, Kai Wang, Yuxuan Liang, Dawei Cheng, Shirui Pan, Tianlong Chen

Specifically, GST initially constructs a topology & semantic anchor at a low training cost, followed by performing dynamic sparse training to align the sparse graph with the anchor.
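Dynamic sparse training, as used here, alternates between magnitude pruning and weight regrowth so that a fixed sparsity budget is maintained while the active topology adapts. A minimal generic sketch of one such update (SET-style random regrowth on a dense weight array; this is an illustrative assumption, not the paper's GST algorithm or its anchor alignment):

```python
import numpy as np

def dst_step(weights, sparsity=0.5, regrow_frac=0.1, rng=None):
    """One dynamic-sparse-training update: keep the largest-magnitude
    weights, then regrow a few pruned positions (initialized to zero,
    so they can receive gradient in the next training step)."""
    rng = rng or np.random.default_rng(0)
    flat = weights.ravel().copy()
    k = int(flat.size * (1 - sparsity))            # active weights to keep
    mask = np.zeros(flat.size, dtype=bool)
    mask[np.argsort(np.abs(flat))[-k:]] = True     # magnitude pruning
    pruned = np.flatnonzero(~mask)
    regrown = rng.choice(pruned, int(regrow_frac * pruned.size), replace=False)
    mask[regrown] = True                           # random regrowth
    new = flat * mask
    new[regrown] = 0.0                             # regrown weights start at 0
    return new.reshape(weights.shape), mask.reshape(weights.shape)
```

Between calls, a normal gradient step updates the surviving and regrown weights, so connections that prove useful accumulate magnitude and survive the next pruning round.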

Adversarial Defense · Graph Learning

GIF: A General Graph Unlearning Strategy via Influence Function

1 code implementation • 6 Apr 2023 • Jiancan Wu, Yi Yang, Yuchun Qian, Yongduo Sui, Xiang Wang, Xiangnan He

Then, we identify why traditional influence functions fall short for graph unlearning, and devise Graph Influence Function (GIF), a model-agnostic unlearning method that efficiently and accurately estimates parameter changes in response to an $\epsilon$-mass perturbation of the deleted data.
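The classical influence-function machinery that GIF builds on estimates the parameter change from deleting a training point $z$ as $H^{-1}\nabla_\theta \ell(z;\theta^*)$, where $H$ is the Hessian of the training loss. A minimal sketch on ridge regression, where the loss is quadratic so the estimate nearly matches actual retraining (this illustrates the classical estimate only, not GIF's graph-aware correction terms):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
lam = 1.0  # ridge regularization strength

def fit(Xm, ym):
    """Closed-form ridge regression: (X^T X + lam I)^{-1} X^T y."""
    d = Xm.shape[1]
    return np.linalg.solve(Xm.T @ Xm + lam * np.eye(d), Xm.T @ ym)

theta = fit(X, y)

# Influence estimate for deleting point z = (X[0], y[0]):
# delta_theta ≈ H^{-1} grad_theta loss(z; theta)
z_x, z_y = X[0], y[0]
H = X.T @ X + lam * np.eye(3)            # Hessian of the full objective
grad_z = z_x * (z_x @ theta - z_y)       # gradient of the deleted point's loss
theta_est = theta + np.linalg.solve(H, grad_z)

theta_true = fit(X[1:], y[1:])           # ground truth: retrain without z
```

Here `theta_est` and `theta_true` agree to within the second-order error of approximating the leave-one-out Hessian by the full-data Hessian, which is the efficiency/accuracy trade-off the paper quantifies for graphs.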

Machine Unlearning

Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift

1 code implementation • NeurIPS 2023 • Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He

From the perspective of invariant learning and stable learning, recently well-established paradigms for out-of-distribution generalization, the stable features of a graph are assumed to causally determine its labels, while environmental features tend to be unstable and can lead to the two primary types of distribution shift.

Data Augmentation · Graph Classification · +2

Causal Attention for Interpretable and Generalizable Graph Classification

1 code implementation • 30 Dec 2021 • Yongduo Sui, Xiang Wang, Jiancan Wu, Min Lin, Xiangnan He, Tat-Seng Chua

To endow the classifier with better interpretation and generalization, we propose the Causal Attention Learning (CAL) strategy, which discovers the causal patterns and mitigates the confounding effect of shortcuts.

Graph Attention · Graph Classification

Inductive Lottery Ticket Learning for Graph Neural Networks

no code implementations • 29 Sep 2021 • Yongduo Sui, Xiang Wang, Tianlong Chen, Xiangnan He, Tat-Seng Chua

In this work, we propose a simple and effective learning paradigm, Inductive Co-Pruning of GNNs (ICPG), to endow graph lottery tickets with inductive pruning capacity.

Graph Classification · Node Classification · +1

Exploring Lottery Ticket Hypothesis in Media Recommender Systems

1 code implementation • 2 Aug 2021 • Yanfang Wang, Yongduo Sui, Xiang Wang, Zhenguang Liu, Xiangnan He

We draw inspiration from the recently proposed lottery ticket hypothesis (LTH), which argues that a dense, over-parameterized model contains a much smaller, sparser sub-model that can reach performance comparable to the full model.
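The standard recipe for finding such sub-models is iterative magnitude pruning (IMP): train, prune the smallest surviving weights, rewind the survivors to their initial values, and repeat. A minimal sketch of one pruning round (generic IMP on a flat weight array; the training step is elided, and this is not the paper's recommender-specific procedure):

```python
import numpy as np

def imp_round(trained_w, init_w, mask, prune_frac=0.2):
    """One iterative-magnitude-pruning round: prune the lowest-magnitude
    fraction of surviving weights, rewind the rest to initialization."""
    surviving = np.flatnonzero(mask)
    n_prune = int(prune_frac * surviving.size)
    # surviving indices ordered by trained magnitude, smallest first
    order = surviving[np.argsort(np.abs(trained_w[surviving]))]
    new_mask = mask.copy()
    new_mask[order[:n_prune]] = False        # drop the weakest weights
    return init_w * new_mask, new_mask       # rewind survivors to init
```

Repeating this loop (train, prune 20%, rewind) yields progressively sparser "winning tickets" that are then retrained from the original initialization.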

Recommendation Systems · Representation Learning

GANs Can Play Lottery Tickets Too

1 code implementation • ICLR 2021 • Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen

In this work, we study for the first time the existence of such trainable matching subnetworks in deep GANs.

Image-to-Image Translation

A Unified Lottery Ticket Hypothesis for Graph Neural Networks

2 code implementations • 12 Feb 2021 • Tianlong Chen, Yongduo Sui, Xuxi Chen, Aston Zhang, Zhangyang Wang

With graphs rapidly growing in size and deeper graph neural networks (GNNs) emerging, the training and inference of GNNs become increasingly expensive.

Link Prediction · Node Classification

Graph Contrastive Learning with Augmentations

4 code implementations • NeurIPS 2020 • Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen

In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
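Frameworks of this kind generate two augmented views of each graph, encode them, and train with a contrastive objective that pulls matching views together. A sketch of the standard NT-Xent loss on precomputed view embeddings (toy inputs stand in for the paper's GNN encoder and graph augmentations, which are not reproduced here):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two batches of view embeddings
    (each of shape (n, d)); row i of z1 and row i of z2 are positives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / tau                                # cosine sims / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

The loss is small when each embedding is closest to its positive counterpart and large when positives are no closer than random negatives, which is what drives the encoder to produce augmentation-invariant representations.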

Contrastive Learning · Representation Learning · +2
