Search Results for author: Hanchen Wang

Found 24 papers, 11 papers with code

STG-Mamba: Spatial-Temporal Graph Learning via Selective State Space Model

no code implementations • 19 Mar 2024 • Lincan Li, Hanchen Wang, Wenjie Zhang, Adelle Coster

In this work, we introduce Spatial-Temporal Graph Mamba (STG-Mamba), the first exploration of leveraging powerful selective state space models for STG learning: it treats the STG network as a system and employs the Graph Selective State Space Block (GS3B) to precisely characterize the dynamic evolution of STG networks.

Computational Efficiency • Graph Learning
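
For readers unfamiliar with selective state space models, the sketch below is a rough, self-contained illustration of the idea the STG-Mamba abstract refers to: a discretized state-space recurrence over graph node features whose step size and input/output maps depend on the input ("selection"), preceded by a simple one-hop graph mix. It is not the paper's GS3B block; all shapes, parameter names, and the aggregation scheme are assumptions.

import numpy as np

def selective_ssm_graph_step(x_seq, adj, state_dim=8, seed=0):
    """Toy selective state-space recurrence over graph node features.

    x_seq : (T, N, F) array of node features over T time steps.
    adj   : (N, N) adjacency matrix used for a simple neighbourhood mix.
    Returns a (T, N, F) array of outputs. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    T, N, F = x_seq.shape
    A = -np.abs(rng.normal(size=(state_dim,)))          # stable diagonal state matrix
    W_delta = rng.normal(scale=0.1, size=(F, 1))        # input-dependent step size
    W_B = rng.normal(scale=0.1, size=(F, state_dim))    # input-dependent input map
    W_C = rng.normal(scale=0.1, size=(F, state_dim))    # input-dependent output map
    W_out = rng.normal(scale=0.1, size=(state_dim, F))

    # Row-normalised adjacency for a one-hop spatial mix before the recurrence.
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    A_hat = adj / deg

    h = np.zeros((N, state_dim))
    outputs = np.zeros_like(x_seq)
    for t in range(T):
        x = A_hat @ x_seq[t]                            # spatial aggregation
        delta = np.log1p(np.exp(x @ W_delta))           # softplus: positive step size
        B_t = x @ W_B                                   # selection: parameters depend on input
        C_t = x @ W_C
        A_bar = np.exp(delta * A)                       # discretised state transition
        h = A_bar * h + delta * B_t * x.mean(axis=1, keepdims=True)
        outputs[t] = (C_t * h) @ W_out                  # read out back to feature space
    return outputs

# Tiny smoke test on a 4-node ring graph.
adj = np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1)
y = selective_ssm_graph_step(np.random.randn(5, 4, 3), adj)
print(y.shape)  # (5, 4, 3)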

A Physics-guided Generative AI Toolkit for Geophysical Monitoring

no code implementations • 6 Jan 2024 • Junhuan Yang, Hanchen Wang, Yi Sheng, Youzuo Lin, Lei Yang

Full-waveform inversion (FWI) plays a vital role in geoscience to explore the subsurface.

SSIM

An Empirical Study of Large-Scale Data-Driven Full Waveform Inversion

no code implementations • 28 Jul 2023 • Peng Jin, Yinan Feng, Shihang Feng, Hanchen Wang, Yinpeng Chen, Benjamin Consolvo, Zicheng Liu, Youzuo Lin

This paper investigates the impact of big data on deep learning models to help solve the full waveform inversion (FWI) problem.

Denoising Variational Graph of Graphs Auto-Encoder for Predicting Structured Entity Interactions

1 code implementation • IEEE Transactions on Knowledge and Data Engineering 2023 • Han Chen, Hanchen Wang, Hongmei Chen, Ying Zhang, Wenjie Zhang, Xuemin Lin

The interactions between structured entities play important roles in a wide range of applications such as chemistry, material science, biology, and medical science.

Denoising

Solving Seismic Wave Equations on Variable Velocity Models with Fourier Neural Operator

no code implementations • 25 Sep 2022 • Bian Li, Hanchen Wang, Xiu Yang, Youzuo Lin

Previous works that concentrate on solving the wave equation with neural networks consider either a single velocity model or multiple simple velocity models, which is restrictive in practice.

Computational Efficiency • Operator Learning +1

Evaluating Self-Supervised Learning for Molecular Graph Embeddings

1 code implementation • NeurIPS 2023 • Hanchen Wang, Jean Kaddour, Shengchao Liu, Jian Tang, Joan Lasenby, Qi Liu

Graph Self-Supervised Learning (GSSL) provides a robust pathway for acquiring embeddings without expert labelling, a capability that carries profound implications for molecular graphs due to the staggering number of potential molecules and the high cost of obtaining labels.

Self-Supervised Learning

Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching

no code implementations • 25 Jan 2022 • Hanchen Wang, Ying Zhang, Lu Qin, Wei Wang, Wenjie Zhang, Xuemin Lin

In recent years, many advanced techniques for query vertex ordering (i.e., matching order generation) have been proposed to reduce unpromising intermediate results according to preset heuristic rules.

Reinforcement Learning (RL)
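
As context for "matching order generation", the snippet below shows a typical preset heuristic of the kind the abstract mentions: greedily order the query vertices by how strongly each is constrained by the vertices already placed. It is a generic illustration of the baseline heuristics, not the paper's reinforcement learning model; the tie-breaking rule is an assumption.

from collections import defaultdict

def heuristic_matching_order(query_edges):
    """Greedy connected ordering of query vertices by degree.

    query_edges : iterable of (u, v) pairs describing the query graph.
    Returns a list of query vertices: a matching order of the kind preset
    heuristics produce and that an RL policy could learn to generate instead.
    """
    adj = defaultdict(set)
    for u, v in query_edges:
        adj[u].add(v)
        adj[v].add(u)

    # Start from the highest-degree vertex.
    order = [max(adj, key=lambda x: len(adj[x]))]
    remaining = set(adj) - set(order)
    while remaining:
        # Prefer vertices with many edges to already-ordered vertices
        # (more constraints earlier means fewer intermediate partial matches),
        # breaking ties by overall degree.
        nxt = max(
            remaining,
            key=lambda x: (len(adj[x] & set(order)), len(adj[x])),
        )
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(heuristic_matching_order([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]))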

OpenFWI: Large-Scale Multi-Structural Benchmark Datasets for Seismic Full Waveform Inversion

2 code implementations • 4 Nov 2021 • Chengyuan Deng, Shihang Feng, Hanchen Wang, Xitong Zhang, Peng Jin, Yinan Feng, Qili Zeng, Yinpeng Chen, Youzuo Lin

The recent success of data-driven FWI methods results in a rapidly increasing demand for open datasets to serve the geophysics community.

Benchmarking +2

Iterative Teaching by Label Synthesis

no code implementations • NeurIPS 2021 • Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller

In this paper, we consider the problem of iterative machine teaching, where a teacher provides examples sequentially based on the current iterative learner.
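
A minimal sketch of the iterative machine teaching protocol described above: an omniscient teacher watches the learner's current weights and, at every round, hands over the pool example whose single gradient step moves the learner closest to a target model. This illustrates the setting only, not the label-synthesis method proposed in the paper; the squared-loss linear learner and all names are assumptions.

import numpy as np

def iterative_teaching(pool_X, pool_y, w_target, steps=50, lr=0.1, seed=0):
    """Greedy omniscient teacher for a linear least-squares learner.

    At every round the teacher inspects the learner's current weights and
    hands over the pool example whose gradient step brings the weights
    closest (in L2) to w_target. Illustrative protocol only.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=w_target.shape)
    for _ in range(steps):
        best_w, best_dist = None, np.inf
        for x, y in zip(pool_X, pool_y):
            grad = (w @ x - y) * x                 # squared-loss gradient for one example
            w_next = w - lr * grad
            dist = np.linalg.norm(w_next - w_target)
            if dist < best_dist:
                best_w, best_dist = w_next, dist
        w = best_w                                 # learner updates on the chosen example
    return w

# Toy run: teach a 3-dimensional linear model.
w_star = np.array([1.0, -2.0, 0.5])
X = np.random.default_rng(1).normal(size=(100, 3))
y = X @ w_star
print(np.round(iterative_teaching(X, y, w_star), 3))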

Pre-training Molecular Graph Representation with 3D Geometry

1 code implementation • ICLR 2022 • Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, Jian Tang

However, the lack of 3D information in real-world scenarios has significantly impeded the learning of geometric graph representation.

Graph Representation Learning • Self-Supervised Learning

MLReal: Bridging the gap between training on synthetic data and real data applications in machine learning

no code implementations • 11 Sep 2021 • Tariq Alkhalifah, Hanchen Wang, Oleg Ovcharenko

This is accomplished by applying two operations to the input data of the NN model: 1) the crosscorrelation of the input data (i.e., shot gather, seismic image, etc.)

Domain Adaptation
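
The first operation quoted in the MLReal excerpt, crosscorrelation of the input data, can be sketched as follows: every trace of a shot gather is cross-correlated with a fixed reference trace before it is fed to the network, which suppresses absolute time/phase information that differs between synthetic and field data. The reference-trace choice and the use of scipy.signal.correlate are assumptions for illustration, not the paper's exact pipeline.

import numpy as np
from scipy.signal import correlate

def crosscorrelate_gather(gather, ref_index=0):
    """Cross-correlate every trace of a shot gather with one reference trace.

    gather    : (n_traces, n_samples) array.
    ref_index : index of the trace used as the reference.
    Returns an array of the same shape holding the central window of each
    cross-correlation.
    """
    n_traces, n_samples = gather.shape
    ref = gather[ref_index]
    out = np.empty_like(gather, dtype=float)
    for i in range(n_traces):
        full = correlate(gather[i], ref, mode="full")   # length 2*n_samples - 1
        start = (full.size - n_samples) // 2
        out[i] = full[start:start + n_samples]          # keep the central window
    return out

# Toy gather: 8 traces, 128 samples of noise.
rng = np.random.default_rng(0)
gather = rng.normal(scale=0.1, size=(8, 128))
print(crosscorrelate_gather(gather).shape)  # (8, 128)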

Matching Point Sets with Quantum Circuit Learning

no code implementations • 12 Feb 2021 • Mohammadreza Noormandipour, Hanchen Wang

In this work, we propose a parameterised quantum circuit learning approach to the point set matching problem.

Set Matching

Unsupervised Point Cloud Pre-Training via Occlusion Completion

1 code implementation • ICCV 2021 • Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, Matthew J. Kusner

We find that even when we construct a single pre-training dataset (from ModelNet40), this pre-training method improves accuracy across different datasets and encoders, on a wide range of downstream tasks.

3D Point Cloud Linear Classification • Few-Shot 3D Point Cloud Classification +5

Pre-Training by Completing Point Clouds

no code implementations • 28 Sep 2020 • Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, Matt Kusner

There has recently been a flurry of exciting advances in deep learning models on point clouds.

GoGNN: Graph of Graphs Neural Network for Predicting Structured Entity Interactions

1 code implementation • 12 May 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xuemin Lin

We observe that existing works on structured entity interaction prediction cannot properly exploit the unique graph of graphs model.
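
To make the "graph of graphs" model mentioned in the GoGNN excerpt concrete, here is a minimal data structure: an outer interaction graph whose vertices are themselves small graphs (e.g., molecules). It only illustrates the data model, not GoGNN's neural architecture; all class and field names are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class InnerGraph:
    """A structured entity, e.g. a molecule: nodes (atoms) and edges (bonds)."""
    nodes: List[str]
    edges: List[Tuple[int, int]]

@dataclass
class GraphOfGraphs:
    """Outer interaction graph whose vertices are entire inner graphs."""
    entities: Dict[str, InnerGraph] = field(default_factory=dict)
    interactions: List[Tuple[str, str]] = field(default_factory=list)

    def add_entity(self, name: str, graph: InnerGraph) -> None:
        self.entities[name] = graph

    def add_interaction(self, a: str, b: str) -> None:
        # An interaction edge connects two structured entities.
        self.interactions.append((a, b))

# Two toy "molecules" and one interaction between them.
gog = GraphOfGraphs()
gog.add_entity("water", InnerGraph(["O", "H", "H"], [(0, 1), (0, 2)]))
gog.add_entity("ethanol", InnerGraph(["C", "C", "O"], [(0, 1), (1, 2)]))
gog.add_interaction("water", "ethanol")
print(len(gog.entities), len(gog.interactions))  # 2 1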

Binarized Graph Neural Network

no code implementations • 19 Apr 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xiangjian He, Yiguang Lin, Xuemin Lin

Our proposed method can be seamlessly integrated into existing GNN-based embedding approaches to binarize the model parameters and learn compact embeddings.

Graph Embedding
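
The Binarized Graph Neural Network excerpt refers to binarizing model parameters while keeping embeddings compact. A common generic scheme, sign binarization with a per-row scaling factor in the spirit of XNOR-style networks, is sketched below; it is not necessarily the paper's exact binarization method.

import numpy as np

def binarize_weights(W):
    """Binarize a weight matrix to {-1, +1} with per-row scaling.

    Each row is approximated as alpha * sign(row), where alpha is the mean
    absolute value of the row; this keeps matrix products cheap while
    roughly preserving magnitude. Generic illustration only.
    """
    alpha = np.abs(W).mean(axis=1, keepdims=True)       # (rows, 1) scale factors
    B = np.where(W >= 0, 1.0, -1.0)                     # binary weights
    return alpha, B

def binary_linear(x, alpha, B):
    """Apply the binarized layer: x @ (alpha * B)^T using only +/-1 weights."""
    return (x @ B.T) * alpha.T

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))            # full-precision layer weights
x = rng.normal(size=(4, 32))             # a batch of node embeddings
alpha, B = binarize_weights(W)
err = np.linalg.norm(x @ W.T - binary_linear(x, alpha, B)) / np.linalg.norm(x @ W.T)
print(round(float(err), 3))              # relative approximation error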

Neural Random Subspace

1 code implementation • 18 Nov 2019 • Yun-Hao Cao, Jianxin Wu, Hanchen Wang, Joan Lasenby

The random subspace method, known as the pillar of random forests, is good at making precise and robust predictions.

Representation Learning
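
For background on the random subspace method the excerpt calls "the pillar of random forests", here is a minimal version of the classic ensemble: each base learner sees only a random subset of features and their predictions are averaged. The nearest-centroid base learner is an assumption, and this is the classical method, not the paper's neural formulation.

import numpy as np

def random_subspace_predict(X_train, y_train, X_test, n_learners=20,
                            subspace_size=4, seed=0):
    """Classic random subspace ensemble with nearest-centroid base learners.

    Each learner sees only a random subset of the features; class scores
    are averaged over learners. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y_train)
    scores = np.zeros((len(X_test), len(classes)))
    for _ in range(n_learners):
        feats = rng.choice(X_train.shape[1], size=subspace_size, replace=False)
        centroids = np.stack([X_train[y_train == c][:, feats].mean(axis=0)
                              for c in classes])
        d = np.linalg.norm(X_test[:, None, feats] - centroids[None], axis=2)
        scores += -d                                    # closer centroid, higher score
    return classes[scores.argmax(axis=1)]

# Toy two-class problem in 10 dimensions.
rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, size=(50, 10))
X1 = rng.normal(loc=1.0, size=(50, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
print(random_subspace_predict(X, y, X[:5]))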

An Empirical Study on Learning Fairness Metrics for COMPAS Data with Human Supervision

1 code implementation • 22 Oct 2019 • Hanchen Wang, Nina Grgic-Hlaca, Preethi Lahoti, Krishna P. Gummadi, Adrian Weller

We do not provide a way to directly learn a similarity metric that satisfies individual fairness; instead, we present an empirical study of how such a metric can be derived from human supervisors, so that future work can use it as a tool for understanding human supervision.

Fairness • Metric Learning
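
One simple way to derive a similarity metric from human supervision, in the spirit of the study above, is to fit per-feature weights so that a weighted squared distance separates pairs the supervisor labelled "similar" from pairs labelled "dissimilar". The logistic-regression-style formulation below is an illustrative assumption, not the paper's exact procedure.

import numpy as np

def fit_similarity_metric(X, pairs, labels, lr=0.1, epochs=300):
    """Learn per-feature weights of a similarity metric from human pair labels.

    pairs  : list of (i, j) index pairs into X.
    labels : 1 if the human supervisor judged the pair similar, else 0.
    Model  : P(similar) = sigmoid(b - sum_k w_k * (x_i[k] - x_j[k])^2), w >= 0.
    Returns the learned weights w and bias b. Illustrative formulation only.
    """
    D = np.array([(X[i] - X[j]) ** 2 for i, j in pairs])   # (n_pairs, n_features)
    y = np.asarray(labels, dtype=float)
    w = np.full(X.shape[1], 0.1)
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(b - D @ w)))             # predicted P(similar)
        grad_w = -(D.T @ (p - y)) / len(y)                 # d(logit)/dw = -D
        grad_b = np.mean(p - y)
        w = np.maximum(w - lr * grad_w, 0.0)               # project onto w >= 0
        b -= lr * grad_b
    return w, b

# Toy example: the "human" only cares about feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
pairs = [(i, j) for i in range(40) for j in range(i + 1, 40)]
labels = [int(abs(X[i, 0] - X[j, 0]) < 0.5) for i, j in pairs]
w, b = fit_similarity_metric(X, pairs, labels)
print(np.round(w, 3))  # the weight on feature 0 should be the larger one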
