1 code implementation • 26 Jan 2024 • Shibbir Ahmed, Hongyang Gao, Hridesh Rajan
In this work, we propose a novel technique that uses rules derived from neural network computations to infer data preconditions for a DNN model to determine the trustworthiness of its predictions.
1 code implementation • 24 Dec 2023 • Zhaoning Yu, Hongyang Gao
In this paper, we develop a data-driven motif extraction technique known as MotifPiece, which employs statistical measures to define motifs.
1 code implementation • 15 Dec 2022 • Benjamin Steenhoek, Hongyang Gao, Wei Le
In this paper, we propose to combine such causal-based vulnerability detection algorithms with deep learning, aiming to achieve more efficient and effective vulnerability detection.
no code implementations • 30 Sep 2022 • Tianxiang Gao, Hongyang Gao
We show that global convergence is guaranteed, even if only the implicit layer is trained.
no code implementations • 16 May 2022 • Tianxiang Gao, Hongyang Gao
Implicit deep learning has recently become popular in the machine learning community since these implicit models can achieve competitive performance with state-of-the-art deep networks while using significantly less memory and computational resources.
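The prediction rule of such implicit models is a fixed-point equation z = f(z, x) rather than a finite stack of layers, which is why the memory cost stays constant in depth. A minimal sketch of solving one such equilibrium by simple iteration (names, shapes, and the tanh parameterization are illustrative assumptions, not the authors' model):

```python
import numpy as np

def implicit_layer(x, W, U, tol=1e-8, max_iter=500):
    """Solve z = tanh(W @ z + U @ x) by fixed-point iteration.

    When the spectral norm of W is below 1, the map z -> tanh(W z + U x)
    is a contraction, so the iteration converges to a unique equilibrium.
    """
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_new = np.tanh(W @ z + U @ x)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))   # small scale keeps the map contractive
U = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
z_star = implicit_layer(x, W, U)
# At equilibrium, z_star satisfies the fixed-point equation up to tolerance.
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x))
```

Only the equilibrium z_star is stored, regardless of how many iterations the solver ran; that is the memory saving the sentence above refers to.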
1 code implementation • 1 Feb 2022 • Zhaoning Yu, Hongyang Gao
We propose a novel molecular graph representation learning method by constructing a heterogeneous motif graph to address this issue.
no code implementations • 1 Feb 2022 • Zhaoning Yu, Hongyang Gao
Most existing GNN explanation methods identify the most important edges or nodes but fail to consider substructures, which are more important for graph data.
no code implementations • ICLR 2022 • Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, Hongyang Gao
Implicit deep learning has received increasing attention recently because it generalizes the recursive prediction rules of many commonly used neural network architectures.
no code implementations • 15 Mar 2021 • Hongyang Gao, Yi Liu, Xuan Zhang, Shuiwang Ji
We study text representation methods using deep models.
no code implementations • 1 Jan 2021 • Hongyang Gao, Shuiwang Ji
Line graphs have been shown to be effective in improving feature learning in graph neural networks.
no code implementations • 1 Jan 2021 • Hongyang Gao, Shuiwang Ji
To address these limitations, we propose a teleport graph convolution layer (TeleGCL) that uses teleport functions to enable each node to aggregate information from a much larger neighborhood.
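A teleport function selects relevant nodes outside a node's local neighborhood so their features can be aggregated directly. A rough sketch under assumptions of ours (feature cosine similarity as the teleport criterion, mean aggregation; the paper's actual layer may differ):

```python
import numpy as np

def telegcl_aggregate(X, A, k=2):
    """Sketch of teleport-style aggregation (illustrative, not the exact layer).

    Each node averages features over its local neighbors (from adjacency A)
    plus k 'teleport' nodes chosen by cosine similarity of features, so
    information can arrive from well outside the local neighborhood.
    """
    n = X.shape[0]
    unit = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T                           # cosine similarity
    np.fill_diagonal(sim, -np.inf)                # exclude self from teleports
    out = np.zeros_like(X)
    for i in range(n):
        neighbors = set(np.flatnonzero(A[i]))
        teleports = set(np.argsort(sim[i])[-k:])  # k most similar nodes
        group = sorted(neighbors | teleports | {i})
        out[i] = X[list(group)].mean(axis=0)
    return out

X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
H = telegcl_aggregate(X, A, k=1)   # node 0 teleports to similar node 1
```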
no code implementations • 19 Oct 2020 • Hongyang Gao, Yi Liu, Shuiwang Ji
In addition, graph topology is incorporated in global voting to compute the importance score of each node globally in the entire graph.
3 code implementations • 18 Jul 2020 • Meng Liu, Hongyang Gao, Shuiwang Ji
Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
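The key idea is to decouple propagation from transformation and combine the representations produced at different propagation depths with per-hop weights. A sketch under our own simplifications (the combination weights are passed in explicitly here; in DAGNN they come from a learned gating score):

```python
import numpy as np

def dagnn_propagate(X, A, K=4, gate=None):
    """Sketch: propagate K hops with a normalized adjacency, then
    adaptively combine {X, A_hat X, ..., A_hat^K X} with per-hop weights."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    A_hat = A_hat / A_hat.sum(axis=1)[:, None]   # row-normalize
    hops = [X]
    for _ in range(K):
        hops.append(A_hat @ hops[-1])
    if gate is None:
        gate = np.ones(K + 1) / (K + 1)   # uniform weights for illustration
    return sum(w * h for w, h in zip(gate, hops))

X = np.eye(3)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = dagnn_propagate(X, A, K=2)
```

Because propagation carries no per-hop transformation, deep receptive fields come cheap, and the gate decides how much each depth contributes per node.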
Ranked #2 on Node Classification on AMZ Computers
1 code implementation • ICLR 2020 • Hongyang Gao, Zhengyang Wang, Shuiwang Ji
Use of attention operators on high-order data requires flattening of the spatial or spatial-temporal dimensions into a vector, which is assumed to follow a multivariate normal distribution.
no code implementations • 25 Sep 2019 • Hongyang Gao, Yaochen Xie, Shuiwang Ji
This results in the Siamese attention operator (SAO).
no code implementations • 25 Sep 2019 • Hongyang Gao, Shuiwang Ji
Previous studies used global ranking methods to sample some of the important nodes, but most of them are not able to incorporate graph topology information in computing ranking scores.
1 code implementation • 5 Jul 2019 • Hongyang Gao, Shuiwang Ji
To further reduce the requirements on computational resources, we propose the cGAO that performs attention operations along channels.
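Attending over the d channel vectors instead of the n spatial locations changes the dominant cost from O(n²d) to O(d²n), a large saving when n ≫ d. A minimal dot-product sketch of the idea (our own simplification, not the paper's exact operator):

```python
import numpy as np

def channel_attention(X):
    """Sketch of attention along the channel dimension.

    X has shape (d, n): d channels, n spatial locations.  The affinity
    matrix is (d, d) rather than (n, n), so cost grows with channels,
    not with spatial size.
    """
    scores = X @ X.T / np.sqrt(X.shape[1])       # (d, d) channel affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X                           # re-mix channels

X = np.arange(12, dtype=float).reshape(3, 4)    # 3 channels, 4 locations
Y = channel_attention(X)
```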
Ranked #8 on Graph Classification on D&D (using extra training data)
3 code implementations • 11 May 2019 • Hongyang Gao, Shuiwang Ji
We further propose the gUnpool layer as the inverse operation of the gPool layer.
Ranked #4 on Graph Classification on D&D
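The gPool layer scores nodes by projecting features onto a trainable vector, keeps the top-k nodes, and gates their features so the projection vector receives gradient; gUnpool inverts this by placing the pooled rows back at their recorded positions. A compact sketch of that pair:

```python
import numpy as np

def g_pool(X, p, k):
    """gPool sketch: score nodes by projection onto p, keep the top-k,
    and gate kept features by sigmoid(score)."""
    scores = X @ p / np.linalg.norm(p)
    idx = np.sort(np.argsort(scores)[-k:])       # indices of kept nodes
    gate = 1.0 / (1.0 + np.exp(-scores[idx]))
    return X[idx] * gate[:, None], idx

def g_unpool(X_pooled, idx, n):
    """gUnpool sketch: distribute pooled rows back to their original
    node positions; all other rows are zero."""
    X = np.zeros((n, X_pooled.shape[1]))
    X[idx] = X_pooled
    return X

X = np.array([[3.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
p = np.array([1.0, 0.0])
Xp, idx = g_pool(X, p, k=2)        # keeps the two highest-scoring nodes
Xu = g_unpool(Xp, idx, n=3)        # restores the original graph size
```

In the encoder-decoder (U-Net-like) architecture, the indices recorded by each gPool are what the matching gUnpool uses to restore node positions.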
1 code implementation • 21 Jan 2019 • Hongyang Gao, Yongjun Chen, Shuiwang Ji
Another limitation of GCNs when used on graph-based text representation tasks is that they do not consider the order information of nodes in a graph.
no code implementations • 27 Sep 2018 • Hongyang Gao, Shuiwang Ji
We further propose the gUnpool layer as the inverse operation of the gPool layer.
2 code implementations • NeurIPS 2018 • Hongyang Gao, Zhengyang Wang, Shuiwang Ji
Compared to prior CNNs designed for mobile devices, ChannelNets achieve a significant reduction in terms of the number of parameters and computational cost without loss in accuracy.
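The parameter saving comes from replacing dense channel mixing (as in 1x1 convolutions, d² parameters for d channels) with a small 1-D convolution sliding over the channel axis. A toy single-position sketch of that substitution (our simplification):

```python
import numpy as np

def channel_wise_conv(x, kernel):
    """Sketch of a channel-wise convolution at one spatial position.

    x: channel vector of length d.  A 1-D kernel of length k slides over
    the channel axis (k parameters) instead of a dense d x d mixing
    matrix (d^2 parameters, as in a 1x1 convolution)."""
    return np.convolve(x, kernel, mode="same")

x = np.ones(8)                         # 8 input channels
kernel = np.array([0.25, 0.5, 0.25])   # 3 parameters vs 64 for dense mixing
y = channel_wise_conv(x, kernel)
```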
1 code implementation • 12 Aug 2018 • Hongyang Gao, Zhengyang Wang, Shuiwang Ji
However, in generic graphs the number of neighboring units is neither fixed nor ordered, which hinders the application of convolutional operations.
Ranked #2 on Document Classification on Cora
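One way to impose a fixed size and order on unordered neighborhoods, in the spirit of this paper's learnable graph convolution, is per-channel k-largest selection: for each node and each feature channel, keep the k largest neighbor values, yielding a regular grid that ordinary 1-D convolutions can consume. A sketch under that reading (our own shapes and padding choices):

```python
import numpy as np

def k_largest_select(X, A, k):
    """Sketch: build a fixed-size, ordered (k+1) x d grid per node by
    taking, per feature channel, the k largest values among neighbors."""
    n, d = X.shape
    grids = np.zeros((n, k + 1, d))
    for i in range(n):
        neigh = np.flatnonzero(A[i])
        feats = X[neigh]
        sorted_feats = -np.sort(-feats, axis=0)   # per-channel descending
        take = min(k, len(neigh))                 # zero-pad small neighborhoods
        grids[i, 0] = X[i]                        # the node itself comes first
        grids[i, 1:1 + take] = sorted_feats[:take]
    return grids

X = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 2.0], [4.0, 1.0]])
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
G = k_largest_select(X, A, k=2)   # every node now has a 3 x 2 ordered grid
```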
no code implementations • 24 Nov 2017 • Hongyang Gao, Shuiwang Ji
In this paper, we propose a set of methods based on kernel rotation and flip to enable rotation and flip invariance in convolutional neural networks.
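The mechanism can be sketched as correlating an input patch with all eight rotated/flipped copies of a kernel and pooling the responses: because the variant set is closed under rotation and flip, transforming the input cannot change the pooled output. A toy max-pooled version (our simplification; the paper covers a set of such methods):

```python
import numpy as np

def invariant_response(patch, kernel):
    """Correlate a patch with all 8 rotated/flipped kernel copies and
    max-pool, so rotating or flipping the patch leaves the output fixed."""
    variants = []
    k = kernel
    for _ in range(4):
        k = np.rot90(k)                 # the 4 rotations...
        variants.append(k)
        variants.append(np.fliplr(k))   # ...and the 4 reflections
    return max(float((patch * v).sum()) for v in variants)

patch = np.array([[0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])
kernel = np.array([[0.0, 0.0, 0.0],
                   [1.0, 1.0, 1.0],
                   [0.0, 1.0, 0.0]])
r1 = invariant_response(patch, kernel)
r2 = invariant_response(np.rot90(patch), kernel)   # rotated input, same response
```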
1 code implementation • 19 May 2017 • Lei Cai, Hongyang Gao, Shuiwang Ji
In the simplest case, the proposed multi-stage VAE divides the decoder into two components, in which the second component generates refined images based on the coarse images generated by the first component.
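The decoder split can be sketched as a coarse stage followed by a refinement stage. Everything below is an assumption for illustration (made-up shapes, untrained weights, and a residual correction standing in for the paper's second network):

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_decoder(z, W1):
    """First stage: map the latent code z to a coarse image."""
    return np.tanh(W1 @ z)

def refine_decoder(coarse, W2):
    """Second stage: refine the coarse image (sketched here as a small
    residual correction computed from the coarse output)."""
    return coarse + 0.1 * np.tanh(W2 @ coarse)

latent_dim, img_dim = 4, 16
W1 = rng.standard_normal((img_dim, latent_dim))
W2 = rng.standard_normal((img_dim, img_dim))
z = rng.standard_normal(latent_dim)
coarse = coarse_decoder(z, W1)
refined = refine_decoder(coarse, W2)
```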
4 code implementations • ICLR 2018 • Hongyang Gao, Hao Yuan, Zhengyang Wang, Shuiwang Ji
When used in image generation tasks, our PixelDCL can largely overcome the checkerboard problem suffered by regular deconvolution operations.
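Checkerboard artifacts arise because a regular deconvolution produces its interleaved output sub-maps independently; the pixel-wise idea is to generate them sequentially, with later sub-maps conditioned on earlier ones. A 1-D, factor-2 toy sketch of that dependency (scalar weights and the exact conditioning are our simplifications):

```python
import numpy as np

def sequential_upsample_1d(x, w_even, w_odd):
    """Sketch of sequential (pixel-wise) deconvolution in 1-D, factor 2.

    A plain deconvolution would produce the even and odd output positions
    from x independently (the source of checkerboard artifacts).  Here the
    odd positions are generated after, and from, the even ones, so the two
    interleaved sub-maps are no longer independent.
    """
    even = w_even * x              # first sub-map: computed from the input
    odd = w_odd * even             # second sub-map: computed from the first
    out = np.empty(2 * len(x))
    out[0::2] = even               # interleave the two sub-maps
    out[1::2] = odd
    return out

x = np.array([1.0, 2.0, 3.0])
y = sequential_upsample_1d(x, w_even=1.0, w_odd=0.5)
```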