On the Expressiveness, Predictability and Interpretability of Neural Temporal Point Processes

29 Sep 2021 · Liangliang Shi, Fangyu Ding, Junchi Yan, Yanjie Duan, Guangjian Tian

Despite rapid advances in neural temporal point processes (NTPPs), which enjoy high model capacity, long-standing gaps remain in model expressiveness, predictability, and interpretability, especially as event sequence modeling sees ever wider application. For expressiveness, we first show that existing NTPP models cannot fit time-varying, and in particular non-terminating, TPPs, and we propose a simple neural model for expressive intensity function modeling. To improve predictability, which the TPP likelihood objective does not directly optimize, we devise new sampling techniques that enable error-metric-driven adaptive fine-tuning of the sampling hyperparameter for predictive TPP, based on the event history in the training sequences. We further show how our prediction techniques support interval-based event prediction. To make NTPPs interpretable, we define the influence of an event on future events by comparing model behavior with and without that event, which enables dependency learning among events and event types. Experimental results on synthetic datasets and public benchmarks demonstrate the efficacy of our approach.
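For intuition about the two ingredients the abstract names, here is a minimal, hypothetical sketch of a neural conditional intensity function and thinning-based sampling for next-event prediction. It is not the paper's actual architecture or sampling algorithm: the class and function names (`TinyIntensityNet`, `sample_next_event`) and the proposal rate `lambda_max` are illustrative assumptions.

```python
# Hypothetical sketch: a neural conditional intensity lambda(t | history)
# and Ogata-style thinning to sample the next inter-event time.
import torch
import torch.nn as nn

class TinyIntensityNet(nn.Module):
    """Maps (history embedding, elapsed time) -> positive intensity lambda(t|H)."""
    def __init__(self, hist_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hist_dim + 1, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
            nn.Softplus(),  # keeps the intensity strictly positive
        )

    def forward(self, hist: torch.Tensor, dt: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([hist, dt], dim=-1)).squeeze(-1)

@torch.no_grad()
def sample_next_event(model, hist, lambda_max=5.0, horizon=50.0):
    """Thinning: propose arrivals from a homogeneous Poisson process with
    rate lambda_max, accept each with probability lambda(t) / lambda_max.
    Exact only if lambda_max actually upper-bounds the intensity (assumed here)."""
    t = 0.0
    while t < horizon:
        t += torch.distributions.Exponential(lambda_max).sample().item()
        lam = model(hist, torch.tensor([t])).item()
        if torch.rand(1).item() * lambda_max < lam:
            return t  # accepted inter-event time
    return None  # no event within the horizon

if __name__ == "__main__":
    model = TinyIntensityNet()
    hist = torch.zeros(8)  # placeholder history embedding
    print(sample_next_event(model, hist))
```

In this sketch, the proposal rate `lambda_max` is the kind of sampling hyperparameter the abstract refers to: one could, in the spirit of the paper, tune it adaptively against a prediction-error metric computed on the training sequences rather than fixing it by hand.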
