no code implementations • 26 Mar 2024 • Haoran Liu, Mingzhe Liu, Peng Li, Jiahui Wu, Xin Jiang, Zhuo Zuo, Bingqi Liu
This process randomly deactivates some neural connections in the RCNN model, realized by applying a random inactivation mask to the input weight matrix.
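This random inactivation of connections amounts to a dropout-style binary mask applied to a weight matrix. A minimal sketch, assuming a generic weight matrix and drop rate (not the paper's exact RCNN configuration):

```python
import numpy as np

def randomly_inactivate(weights, drop_prob=0.3, rng=None):
    """Zero out a random subset of connections in a weight matrix.

    Each entry is kept with probability (1 - drop_prob), mimicking the
    random inactivation of neural connections described above.
    """
    rng = np.random.default_rng(rng)
    mask = rng.random(weights.shape) >= drop_prob  # True = connection kept
    return weights * mask, mask

W = np.ones((4, 4))
W_dropped, mask = randomly_inactivate(W, drop_prob=0.5, rng=0)
# Dropped connections are exactly the masked-out entries.
```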
no code implementations • 9 Mar 2024 • Rui Yang, Haoran Liu, Edison Marrese-Taylor, Qingcheng Zeng, Yu He Ke, Wanxin Li, Lechao Cheng, Qingyu Chen, James Caverlee, Yutaka Matsuo, Irene Li
Large Language Models (LLMs) have significantly advanced healthcare innovation through their generation capabilities.
no code implementations • 29 Aug 2023 • Haoran Liu, Bokun Wang, Jianling Wang, Xiangjue Dong, Tianbao Yang, James Caverlee
As powerful tools for representation learning on graphs, graph neural networks (GNNs) have played an important role in applications including social networks, recommendation systems, and online web services.
1 code implementation • 26 May 2023 • Haoran Liu, Peng Li, Ming-Zhe Liu, Kai-Ming Wang, Zhuo Zuo, Bing-Qi Liu
This study introduces the Tempotron, a powerful classifier based on a third-generation neural network model, for pulse shape discrimination.
no code implementations • 24 May 2023 • Kaimin Wang, Haoran Liu, Peng Li, Mingzhe Liu, Zhuo Zuo
In addition to the pulse signals, this dataset includes the source code for all the aforementioned pulse shape discrimination methods.
1 code implementation • CVPR 2023 • Yulin Liu, Haoran Liu, Yingda Yin, Yang Wang, Baoquan Chen, He Wang
Normalizing flows (NFs) provide a powerful tool to construct an expressive distribution by a sequence of tractable transformations of a base distribution, forming a probabilistic model of the underlying data.
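The core mechanism of an NF is the change-of-variables formula, which adjusts the base log-density by the log-determinant of each transformation. A minimal single-step sketch with an affine map (the scale and shift parameters are illustrative; real flows stack many learned transformations):

```python
import numpy as np

def affine_flow_logpdf(x, scale, shift):
    """Log-density of x = scale * z + shift with base z ~ N(0, 1).

    Change of variables: log p_x(x) = log p_z(z) - log|scale|.
    """
    z = (x - shift) / scale                      # invert the transformation
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))   # standard normal log-density
    return log_pz - np.log(abs(scale))           # subtract log-det of the map

# This recovers the closed-form N(shift, scale^2) density.
lp = affine_flow_logpdf(1.0, scale=2.0, shift=1.0)
```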
1 code implementation • CVPR 2023 • Yinzhen Xu, Weikang Wan, Jialiang Zhang, Haoran Liu, Zikang Shan, Hao Shen, Ruicheng Wang, Haoran Geng, Yijia Weng, Jiayi Chen, Tengyu Liu, Li Yi, He Wang
Trained on our synthesized large-scale dexterous grasp dataset, this model enables us to sample diverse and high-quality dexterous grasp poses for the object point cloud. For the second stage, we propose to replace the motion planning used in parallel gripper grasping with a goal-conditioned grasp policy, due to the complexity involved in dexterous grasping execution.
1 code implementation • 2 Dec 2022 • Maowei Jiang, Pengyu Zeng, Kai Wang, Huan Liu, Wenbo Chen, Haoran Liu
However, the use of the Fourier transform (FT) is problematic due to the Gibbs phenomenon.
1 code implementation • 11 Oct 2022 • Meng Liu, Haoran Liu, Shuiwang Ji
the discrete data space to approximately construct the provably optimal proposal distribution, which is subsequently used by importance sampling to efficiently estimate the original ratio matching objective.
1 code implementation • 26 Jul 2022 • Limei Wang, Haoran Liu, Yi Liu, Jerry Kurtin, Shuiwang Ji
In this work, we propose ProNet, a novel hierarchical graph network that captures these relations.
1 code implementation • 17 Jun 2022 • Limei Wang, Yi Liu, Yuchao Lin, Haoran Liu, Shuiwang Ji
To incorporate 3D information completely and efficiently, we propose a novel message passing scheme that operates within the 1-hop neighborhood.
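A 1-hop message-passing step updates each node from its immediate neighbors' features. A minimal mean-aggregation sketch on a toy graph (a generic illustration of 1-hop aggregation, not the paper's 3D-aware scheme):

```python
import numpy as np

def one_hop_message_pass(adj, features):
    """Update each node by averaging the features of its 1-hop neighbors
    (including itself, via an added self-loop)."""
    a = adj + np.eye(adj.shape[0])      # add self-loops
    deg = a.sum(axis=1, keepdims=True)  # neighborhood sizes
    return (a @ features) / deg         # mean aggregation over the 1-hop neighborhood

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)  # path graph 0-1-2
h = np.array([[1.0], [2.0], [3.0]])
h_new = one_hop_message_pass(adj, h)
# Node 1 averages the features of nodes 0, 1, 2.
```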
Ranked #4 on Drug Discovery on QM9
no code implementations • 6 Oct 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou
Specifically, we design a new metric $\mathcal{P}$-vector to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors.
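Angles between principal subspaces can be computed from the singular values of the product of their orthonormal bases. A minimal sketch of that generic computation (the $\mathcal{P}$-vector construction itself is specific to the paper and not reproduced here):

```python
import numpy as np

def principal_angles(U, V):
    """Principal angles between the subspaces spanned by the orthonormal
    columns of U and V: arccos of the singular values of U^T V."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))  # clip guards against rounding

# Planes spanned by {e1, e2} and {e1, e3} in R^3 share one direction.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
V = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
angles = principal_angles(U, V)
# One angle is 0 (the shared e1 direction), the other pi/2.
```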
1 code implementation • 29 Sep 2021 • Meng Liu, Haoran Liu, Shuiwang Ji
In this study, we propose ratio matching with gradient-guided importance sampling (RMwGGIS) to alleviate the above limitations.
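The core idea, estimating an expectation over a large discrete space by sampling from a proposal and reweighting, reduces to standard importance sampling. A toy sketch with a hand-chosen uniform proposal (not the paper's gradient-guided construction of the provably optimal proposal):

```python
import numpy as np

def importance_estimate(f, target_probs, proposal_probs, n_samples, rng=None):
    """Estimate E_{x~target}[f(x)] by drawing samples from a proposal
    distribution and reweighting each by target(x) / proposal(x)."""
    rng = np.random.default_rng(rng)
    xs = rng.choice(len(proposal_probs), size=n_samples, p=proposal_probs)
    weights = target_probs[xs] / proposal_probs[xs]
    return np.mean(weights * f(xs))

target = np.array([0.1, 0.2, 0.3, 0.4])
proposal = np.full(4, 0.25)  # uniform stand-in proposal
est = importance_estimate(lambda x: x.astype(float), target, proposal,
                          50_000, rng=0)
# True expectation: 0*0.1 + 1*0.2 + 2*0.3 + 3*0.4 = 2.0
```

A better-matched proposal (as the gradient-guided scheme aims to provide) lowers the variance of this estimator for the same sample budget.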
1 code implementation • 23 Mar 2021 • Meng Liu, Youzhi Luo, Limei Wang, Yaochen Xie, Hao Yuan, Shurui Gui, Haiyang Yu, Zhao Xu, Jingtun Zhang, Yi Liu, Keqiang Yan, Haoran Liu, Cong Fu, Bora Oztekin, Xuan Zhang, Shuiwang Ji
Although several libraries exist for deep learning on graphs, they focus on implementing basic operations for graph deep learning.
no code implementations • 1 Jan 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou
While deep learning is effective at learning features/representations from data, the distributions of samples in the feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well studied or compared.
1 code implementation • 21 Dec 2018 • Cheng Yang, Maosong Sun, Haoran Liu, Shiyi Han, Zhiyuan Liu, Huanbo Luan
The strong assumptions oversimplify the complex diffusion mechanism and prevent these models from better fitting real-world cascade data.
Social and Information Networks Physics and Society
no code implementations • 11 Oct 2018 • Fei Tan, Zhi Wei, Jun He, Xiang Wu, Bo Peng, Haoran Liu, Zhenyu Yan
In this work, we focus on predicting attrition, which is one of the typical user-intended actions.