Search Results for author: Qitian Wu

Found 27 papers, 17 papers with code

Graph Out-of-Distribution Generalization via Causal Intervention

1 code implementation • 18 Feb 2024 • Qitian Wu, Fan Nie, Chenxiao Yang, TianYi Bao, Junchi Yan

In this paper, we adopt a bottom-up data-generative perspective and reveal a key observation through causal analysis: the crux of GNNs' failure in OOD generalization lies in the latent confounding bias from the environment.

Causal Inference • Out-of-Distribution Generalization

Rethinking Cross-Domain Sequential Recommendation under Open-World Assumptions

1 code implementation • 8 Nov 2023 • Wujiang Xu, Qitian Wu, Runzhong Wang, Mingming Ha, Qiongxu Ma, Linxun Chen, Bing Han, Junchi Yan

To address these challenges under open-world assumptions, we design an Adaptive Multi-Interest Debiasing framework for cross-domain sequential recommendation (AMID), which consists of a multi-interest information module (MIM) and a doubly robust estimator (DRE).

Sequential Recommendation

Advective Diffusion Transformers for Topological Generalization in Graph Learning

no code implementations • 10 Oct 2023 • Qitian Wu, Chenxiao Yang, Kaipeng Zeng, Fan Nie, Michael Bronstein, Junchi Yan

Graph diffusion equations are intimately related to graph neural networks (GNNs) and have recently attracted attention as a principled framework for analyzing GNN dynamics, formalizing their expressive power, and justifying architectural choices.

Graph Learning

How Graph Neural Networks Learn: Lessons from Training Dynamics

no code implementations • 8 Oct 2023 • Chenxiao Yang, Qitian Wu, David Wipf, Ruoyu Sun, Junchi Yan

A long-standing goal in deep learning has been to characterize the learning behavior of black-box models in a more interpretable manner.

Inductive Bias

GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks

1 code implementation • 20 Jun 2023 • Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan

Graph structure learning is a well-established problem that aims at optimizing graph structures adaptive to specific graph datasets to help message passing neural networks (i.e., GNNs) yield effective and robust node embeddings.

Graph structure learning

NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification

1 code implementation • 14 Jun 2023 • Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, Junchi Yan

In this paper, we introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes, as an important building block for a pioneering Transformer-style network for node classification on large graphs, dubbed NodeFormer.

Graph structure learning • Image Classification
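The all-pair scheme above can be pictured as attention in which every node attends to every other node. Below is a minimal dense sketch of that reference computation; NodeFormer itself replaces the quadratic softmax with a kernelized approximation for scalability, and the projection matrices here are hypothetical stand-ins rather than the paper's parameterization.

```python
import numpy as np

def all_pair_message_passing(x, w_q, w_k, w_v):
    """Naive dense all-pair attention over node features.

    x: (n, d) node feature matrix; w_q/w_k/w_v: (d, d) projections.
    This is the O(n^2) computation that NodeFormer approximates in
    linear time via a kernelized softmax.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])       # (n, n) pairwise logits
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over all nodes
    return attn @ v                              # aggregated node signals

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                      # 5 nodes, 8 features
w = [rng.normal(size=(8, 8)) for _ in range(3)]
out = all_pair_message_passing(x, *w)
```

Because every node pair exchanges a message, no input adjacency matrix is needed; the attention weights play the role of a learned, dense graph structure.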

Energy-based Out-of-Distribution Detection for Graph Neural Networks

1 code implementation • 6 Feb 2023 • Qitian Wu, Yiting Chen, Chenxiao Yang, Junchi Yan

This paves the way for a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection
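The energy-based score underlying this line of work can be sketched from classifier logits alone. The snippet below shows the standard negative-free-energy score (Liu et al., 2020); GNNSafe builds on this quantity with an additional propagation step over graph edges that is not reproduced here, so treat this as an illustrative baseline, not the full method.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy of classifier logits: -T * logsumexp(logits / T).

    Lower energy indicates a confident (likely in-distribution) input;
    higher energy can be thresholded to flag likely OOD inputs.
    Computed with the max-shift trick for numerical stability.
    """
    z = logits / temperature
    m = z.max(axis=-1, keepdims=True)
    return -temperature * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

# Peaked logits (confident prediction) vs. nearly flat logits:
in_dist = energy_score(np.array([[10.0, 0.0, 0.0]]))
ood = energy_score(np.array([[0.1, 0.0, 0.1]]))
```

A detector then simply compares the score to a threshold chosen on validation data, with no extra training required on top of the classifier.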

DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion

1 code implementation • 23 Jan 2023 • Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf, Junchi Yan

Real-world data generation often involves complex inter-dependencies among instances, violating the IID-data hypothesis of standard learning paradigms and posing a challenge for uncovering the geometric structures needed to learn the desired instance representations.

Image-text Classification • Node Classification +2

Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs

1 code implementation • 18 Dec 2022 • Chenxiao Yang, Qitian Wu, Jiahua Wang, Junchi Yan

Graph neural networks (GNNs), as the de-facto model class for representation learning on graphs, are built upon the multi-layer perceptron (MLP) architecture with additional message passing layers to allow features to flow across nodes.

Representation Learning

Localized Contrastive Learning on Graphs

no code implementations • 8 Dec 2022 • Hengrui Zhang, Qitian Wu, Yu Wang, Shaofeng Zhang, Junchi Yan, Philip S. Yu

Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data.

Contrastive Learning • Data Augmentation +1
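The InfoNCE loss mentioned above can be written in a few lines for a single anchor node. This is a generic sketch of the objective, not the paper's localized variant; the example embeddings are hypothetical and assumed to be L2-normalized.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE loss for one anchor embedding: the positive view is
    pulled close, negatives are pushed away. tau is the temperature;
    similarities are dot products of L2-normalized vectors."""
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    return -np.log(pos / (pos + neg))

anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([1.0, 0.0, 0.0])      # identical view: easiest case
negatives = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])   # orthogonal node embeddings
loss = info_nce(anchor, positive, negatives)
```

The loss shrinks as the anchor-positive similarity grows relative to the anchor-negative similarities, which is exactly the behavior a contrastive node-representation method exploits.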

Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift

1 code implementation • NeurIPS 2023 • Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He

From the perspective of invariant learning and stable learning, a recently well-established paradigm for out-of-distribution generalization, stable features of the graph are assumed to causally determine labels, while environmental features tend to be unstable and can lead to the two primary types of distribution shifts.

Data Augmentation • Graph Classification +2

Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment

1 code implementation • 24 Oct 2022 • Chenxiao Yang, Qitian Wu, Qingsong Wen, Zhiqiang Zhou, Liang Sun, Junchi Yan

The goal of sequential event prediction is to estimate the next event based on a sequence of historical events, with applications to sequential recommendation, user behavior analysis and clinical treatment.

Sequential Recommendation • Variational Inference

Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks

2 code implementations • 24 Oct 2022 • Chenxiao Yang, Qitian Wu, Junchi Yan

We study a new paradigm of knowledge transfer that aims at encoding graph topological information into graph neural networks (GNNs) by distilling knowledge from a teacher GNN model trained on a complete graph to a student GNN model operating on a smaller or sparser graph.

Knowledge Distillation • Transfer Learning
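For readers unfamiliar with the teacher-student setup, here is a generic Hinton-style logit-matching distillation loss. Note this is only a baseline sketch: the paper's geometric variant matches topology-aware intermediate representations rather than output logits, which is not reproduced here.

```python
import numpy as np

def softmax(z, T):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z -= z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return (T * T) * (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
```

In the graph setting, the student GNN runs on a smaller or sparser graph while being trained to match the outputs the teacher GNN produced on the complete graph.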

Trading Hard Negatives and True Negatives: A Debiased Contrastive Collaborative Filtering Approach

no code implementations • 25 Apr 2022 • Chenxiao Yang, Qitian Wu, Jipeng Jin, Xiaofeng Gao, Junwei Pan, Guihai Chen

To circumvent false negatives, we develop a principled approach to improve the reliability of negative instances and prove that the objective is an unbiased estimation of sampling from the true negative distribution.

Collaborative Filtering

Handling Distribution Shifts on Graphs: An Invariance Perspective

2 code implementations • ICLR 2022 • Qitian Wu, Hengrui Zhang, Junchi Yan, David Wipf

There is increasing evidence of neural networks' sensitivity to distribution shifts, which has brought research on out-of-distribution (OOD) generalization into the spotlight.


Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach

1 code implementation • NeurIPS 2021 • Qitian Wu, Chenxiao Yang, Junchi Yan

We target the open-world feature extrapolation problem, where the feature space of input data expands and a model trained on partially observed features must handle new features in test data without further retraining.

Graph Learning

ESCo: Towards Provably Effective and Scalable Contrastive Representation Learning

no code implementations • 29 Sep 2021 • Hengrui Zhang, Qitian Wu, Shaofeng Zhang, Junchi Yan, David Wipf, Philip S. Yu

In this paper, we propose ESCo (Effective and Scalable Contrastive), a new contrastive framework which is essentially an instantiation of the Information Bottleneck principle under self-supervised learning settings.

Contrastive Learning • Representation Learning +1

Inductive Collaborative Filtering via Relation Graph Learning

no code implementations • 1 Jan 2021 • Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Hongyuan Zha

In this paper, we propose an inductive collaborative filtering framework that learns a hidden relational graph among users from the rating matrix.

Collaborative Filtering • Graph Learning +2

Mutual Calibration between Explicit and Implicit Deep Generative Models

no code implementations • 1 Jan 2021 • Qitian Wu, Rui Gao, Hongyuan Zha

Deep generative models are generally categorized into explicit models and implicit models.

Towards Open-World Recommendation: An Inductive Model-based Collaborative Filtering Approach

1 code implementation • 9 Jul 2020 • Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Junchi Yan, Hongyuan Zha

The first model follows conventional matrix factorization, which factorizes the rating matrix of a group of key users to obtain meta latents.

Collaborative Filtering • Matrix Completion +2
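The matrix-factorization step described above admits a very small sketch: approximate the observed ratings R by U V^T, with the user factors U serving as the "meta latents". This is a hypothetical minimal gradient-descent version, not the paper's exact training setup.

```python
import numpy as np

def factorize_ratings(r, k=4, lr=0.01, epochs=200, seed=0):
    """Fit R ≈ U V^T on the observed (non-zero) entries of a rating
    matrix via plain gradient descent. Returns user factors U (the
    'meta latents' role) and item factors V."""
    rng = np.random.default_rng(seed)
    n_users, n_items = r.shape
    u = 0.1 * rng.normal(size=(n_users, k))
    v = 0.1 * rng.normal(size=(n_items, k))
    mask = r > 0                       # zeros treated as unobserved
    for _ in range(epochs):
        err = mask * (r - u @ v.T)     # error on observed cells only
        u += lr * err @ v              # gradient step on user factors
        v += lr * err.T @ u            # gradient step on item factors
    return u, v

# Rank-1 toy rating matrix: 3 key users x 3 items.
r = np.outer(np.array([1.0, 2.0, 3.0]), np.array([1.0, 0.0, 2.0]))
u, v = factorize_ratings(r)
```

An inductive model can then map new users' raw ratings onto these learned latents instead of re-running factorization from scratch.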

Bridging Explicit and Implicit Deep Generative Models via Neural Stein Estimators

no code implementations • NeurIPS 2021 • Qitian Wu, Rui Gao, Hongyuan Zha

To take full advantage of both models and enable mutual compensation, we propose a novel joint training framework that bridges an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy.

Stein Bridging: Enabling Mutual Reinforcement between Explicit and Implicit Generative Models

no code implementations • 25 Sep 2019 • Qitian Wu, Rui Gao, Hongyuan Zha

Deep generative models are generally categorized into explicit models and implicit models.
