Hierarchical Prototype Network for Continual Graph Representation Learning

NeurIPS 2021  ·  Xikun Zhang, Dongjin Song, Dacheng Tao

Despite significant advances in graph representation learning, little attention has been paid to graph data in which new categories of nodes (e.g., new research areas in citation networks or new types of products in co-purchasing networks) and their associated edges continuously emerge. The key challenge is to incorporate the feature and topological information of new nodes in a continual and effective manner so that performance on existing nodes is not disrupted. To this end, we present Hierarchical Prototype Networks (HPNs), which adaptively extract different levels of abstract knowledge in the form of prototypes to represent continually expanding graphs. Specifically, we first leverage a set of Atomic Feature Extractors (AFEs) to generate basic features that encode both the elemental attribute information and the topological structure of the target node. Next, we develop HPNs by adaptively selecting relevant AFEs and representing each node with three levels of prototypes: atomic-level, node-level, and class-level. In this way, whenever a new category of nodes arrives, only the relevant AFEs and prototypes at each level are activated and refined, while the others remain untouched. Finally, we provide a theoretical analysis of the memory consumption bound and the continual learning capability of HPNs. Extensive empirical studies on eight public datasets show that HPNs are memory efficient and achieve state-of-the-art performance on different continual graph representation learning tasks.
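The core continual-learning idea in the abstract — match an incoming feature to the most relevant existing prototype and refine only that one, or establish a new prototype when nothing matches — can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation; the class name `PrototypeStore`, the cosine-similarity matching rule, the threshold, and the running-average refinement are all assumptions made for illustration.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D vectors (epsilon avoids division by zero).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

class PrototypeStore:
    """Hypothetical prototype store for one level (atomic, node, or class).

    An incoming feature is matched against stored prototypes; if the best
    match clears the similarity threshold, only that prototype is refined,
    otherwise a new prototype is created. This mimics (in spirit) activating
    and refining only the relevant prototypes as new node categories appear.
    """

    def __init__(self, threshold=0.8, momentum=0.9):
        self.prototypes = []        # list of 1-D feature vectors
        self.threshold = threshold  # minimum similarity to reuse a prototype
        self.momentum = momentum    # weight kept from the old prototype

    def match_or_create(self, feature):
        if self.prototypes:
            sims = [cosine(feature, p) for p in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                # Refine only the matched prototype; others stay untouched.
                self.prototypes[best] = (
                    self.momentum * self.prototypes[best]
                    + (1.0 - self.momentum) * feature
                )
                return best
        # No sufficiently similar prototype: allocate a new one.
        self.prototypes.append(np.asarray(feature, dtype=float).copy())
        return len(self.prototypes) - 1
```

Under this sketch, memory grows only when a genuinely novel feature pattern appears, which loosely parallels the paper's bounded memory-consumption argument.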
