no code implementations • 6 Apr 2024 • Tianle Pu, Changjun Fan, Mutian Shen, Yizhou Lu, Li Zeng, Zohar Nussinov, Chao Chen, Zhong Liu
The technique originated in physics, but it is highly effective in enabling RL agents to keep exploring and continuously improve their solutions at test time.
no code implementations • 13 Dec 2023 • Wenjie Wu, Changjun Fan, Jincai Huang, Zhong Liu, Junchi Yan
To the best of our knowledge, this is the first systematic review of ML-related methods for BPP.
no code implementations • 9 Sep 2023 • Changan Liu, Changjun Fan, Zhongzhi Zhang
Maximizing influence in complex networks is a practically important but computationally challenging task in social network analysis, due to its NP-hard nature.
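Because exact influence maximization is NP-hard, a classic baseline (not the method of the paper above) is the Kempe-style greedy algorithm, which repeatedly adds the node with the largest estimated marginal gain in expected spread under the independent cascade model. A minimal sketch, with a hypothetical toy graph and hypothetical parameter values:

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """Simulate one independent-cascade spread; return the number of activated nodes."""
    rng = rng or random
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets one chance to activate each neighbor.
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_influence_max(graph, k, p=0.1, trials=200, seed=0):
    """Greedy seed selection: add the node with the largest estimated
    marginal gain in expected spread (Monte Carlo estimate over `trials` runs)."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in chosen:
                continue
            spread = sum(independent_cascade(graph, chosen + [v], p, rng)
                         for _ in range(trials)) / trials
            if spread > best_gain:
                best, best_gain = v, spread
        chosen.append(best)
    return chosen

# Hypothetical small undirected graph as adjacency lists.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (3, 6)]
graph = {}
for u, v in edges:
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

print(greedy_influence_max(graph, k=2, p=0.3))
```

The greedy spread function is submodular, which gives this baseline a (1 - 1/e) approximation guarantee; its cost on large networks is exactly what motivates learned approaches.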
no code implementations • 16 Aug 2023 • Bingxu Zhang, Changjun Fan, Shixuan Liu, Kuihua Huang, Xiang Zhao, Jincai Huang, Zhong Liu
Graph neural networks (GNNs) are effective machine learning models for many graph-related applications.
no code implementations • 8 Jul 2023 • Shixuan Liu, Changjun Fan, Kewei Cheng, Yunfei Wang, Peng Cui, Yizhou Sun, Zhong Liu
Heterogeneous Information Networks (HINs) are information networks with multiple types of nodes and edges.
1 code implementation • 15 Dec 2020 • Cunchao Zhu, Muhao Chen, Changjun Fan, Guangquan Cheng, Yan Zhan
Since such temporal knowledge graphs often suffer from incompleteness, it is important to develop time-aware representation learning models that help to infer the missing temporal facts.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Xuelu Chen, Muhao Chen, Changjun Fan, Ankith Uppunda, Yizhou Sun, Carlo Zaniolo
Predicting missing facts in a knowledge graph (KG) is a crucial task in knowledge base construction and reasoning, and it has been the subject of much recent work using KG embeddings.
Ranked #2 on Knowledge Graph Completion on DBP-5L (French)
no code implementations • 31 May 2019 • Ziniu Hu, Changjun Fan, Ting Chen, Kai-Wei Chang, Yizhou Sun
With the proposed pre-training procedure, generic structural information is learned and preserved; the pre-trained GNN thus requires less labeled data and fewer domain-specific features to achieve high performance on different downstream tasks.
1 code implementation • 24 May 2019 • Changjun Fan, Li Zeng, Yuhui Ding, Muhao Chen, Yizhou Sun, Zhong Liu
By training on small-scale networks, the learned model is capable of assigning relative BC scores to nodes for any unseen networks, and thus identifying the highly-ranked nodes.
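The training signal for such a model comes from exact betweenness centrality (BC) computed on small graphs, which is cheap at that scale. As an illustration of how those labels and rankings can be generated (this is the label-generation step only, not the learned model from the paper), here is Brandes' algorithm on a hypothetical 5-node path graph:

```python
from collections import deque

def betweenness(adj):
    """Exact betweenness centrality via Brandes' algorithm (unweighted, unnormalized).
    For an undirected graph each pair is counted in both directions; this
    scales all scores uniformly, so relative rankings are unaffected."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Single-source BFS, tracking shortest-path counts and predecessors.
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0.0 for v in adj}; sigma[s] = 1.0
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Hypothetical small "training" graph: a 5-node path 0-1-2-3-4.
# The middle node lies on the most shortest paths, so it should rank first.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scores = betweenness(adj)
ranking = sorted(adj, key=scores.get, reverse=True)
print(ranking)
```

Ranking nodes by these exact scores yields the relative-order labels a model can be trained to reproduce, after which the learned model replaces this O(|V||E|) computation on large, unseen networks.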