Incremental Learning on Growing Graphs

1 Jan 2021  ·  Yutong Feng, Jianwen Jiang, Yue Gao

Graphs have attracted considerable attention in a variety of areas and are dynamic in many scenarios. Among dynamic graphs, growing graphs, whose vertex and edge sets expand frequently, are typical and widespread, e.g. rapidly growing social networks. Confronting such growing data, existing methods for either static or dynamic graphs take the entire graph as a whole and may suffer from high computation cost and memory usage as the graph continually grows. To tackle this problem, we introduce incremental graph learning (IGL), a general framework that formulates learning on growing graphs in an incremental manner, in which a traditional graph learning method can be deployed as the basic model. We first analyze the problems of directly finetuning on the incremental part of the graph, and theoretically discuss the unbiased and edge-preserved conditions of IGL. In our method, when the graph grows with newly arriving data, we select or generate vertices and edges of restricted size from the previous graph and use them, together with the new data, to update the current model. Two strategies, i.e. sample-based and cluster-based, are proposed for learning with restricted time and space complexity. We conduct experiments on node classification and link prediction tasks over multiple datasets. Experimental results and comparisons show that our method achieves satisfactory performance with high efficiency on growing graphs.
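
The abstract only outlines the sample-based strategy, so below is a minimal illustrative sketch of the general idea rather than the authors' implementation: when new vertices and edges arrive, keep a budget-limited sample of old vertices (boundary vertices that the new edges attach to, padded with random old vertices) and retrain the basic model on that restricted old subgraph together with the new data. All names here (`sample_based_update`, `train_fn`, `budget`) are hypothetical.

```python
import random


def sample_based_update(prev_vertices, prev_edges, new_vertices, new_edges,
                        budget, train_fn):
    """Sketch of one sample-based incremental step (hypothetical helper).

    prev_vertices / new_vertices: sets of vertex ids.
    prev_edges / new_edges: lists of (u, v) pairs.
    budget: maximum number of old vertices kept for retraining.
    train_fn: callback that updates the basic graph model on
              (vertex set, edge list); stands in for any GNN trainer.
    """
    # Old vertices that the incremental part attaches to.
    boundary = set()
    for u, v in new_edges:
        if u in prev_vertices and v in new_vertices:
            boundary.add(u)
        if v in prev_vertices and u in new_vertices:
            boundary.add(v)

    # Pad with random old vertices up to the memory budget.
    remaining = list(prev_vertices - boundary)
    pad = min(max(budget - len(boundary), 0), len(remaining))
    kept_old = boundary | set(random.sample(remaining, pad))

    # Keep only old edges whose endpoints both survive sampling.
    kept_edges = [(u, v) for u, v in prev_edges
                  if u in kept_old and v in kept_old]

    # Update the basic model on the restricted old subgraph plus new data.
    train_fn(kept_old | set(new_vertices), kept_edges + list(new_edges))
    return kept_old


if __name__ == "__main__":
    # Toy growing graph with a stubbed training function.
    prev_v, prev_e = {0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3)]
    new_v, new_e = {4, 5}, [(3, 4), (4, 5)]
    sample_based_update(
        prev_v, prev_e, new_v, new_e, budget=2,
        train_fn=lambda vs, es: print(len(vs), "vertices,", len(es), "edges"),
    )
```

Because only the budget-sized old subgraph and the new data enter each update, the per-step time and memory cost stays bounded as the graph keeps growing, which is the point of the IGL formulation; a cluster-based variant would replace the random padding with representative vertices generated from clusters of the previous graph.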
