MoDeRNN: Towards Fine-grained Motion Details for Spatiotemporal Predictive Learning

25 Oct 2021 · Zenghao Chai, Zhengzhuo Xu, Chun Yuan

Spatiotemporal predictive learning (ST-PL) aims to predict subsequent frames from a limited observed sequence, and it has broad real-world applications. However, learning representative spatiotemporal features for prediction is challenging, and chaotic uncertainty among consecutive frames further exacerbates the difficulty of long-term prediction. This paper concentrates on improving prediction quality by enhancing the correspondence between the previous context and the current state. We design a Detail Context Block (DCB) to extract fine-grained details and strengthen the otherwise isolated correlation between the upper context state and the current input state. We integrate DCB with the standard ConvLSTM and introduce Motion Details RNN (MoDeRNN), which captures fine-grained spatiotemporal features and improves the expressiveness of the latent states of RNNs, yielding significantly better prediction quality. Experiments on the Moving MNIST and Typhoon datasets demonstrate the effectiveness of the proposed method: MoDeRNN outperforms existing state-of-the-art techniques both qualitatively and quantitatively, at a lower computational cost.
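The sketch below illustrates the integration pattern described in the abstract: a detail-refinement block is applied to the current input, conditioned on the previous hidden state, before a standard ConvLSTM update. The internal structure of the block here (multi-scale convolutions gated by the projected hidden state) is an assumption for illustration only, not the authors' exact DCB design, and the names `DetailContextBlock` and `MoDeRNNCell` are hypothetical.

```python
import torch
import torch.nn as nn


class DetailContextBlock(nn.Module):
    """Hypothetical stand-in for the paper's Detail Context Block (DCB).

    The abstract only states that DCB extracts fine-grained details and
    strengthens the correspondence between the previous context (hidden
    state) and the current input; the multi-scale gated refinement below
    is an assumed design, not the authors' exact one.
    """

    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        # Project the hidden state to the input's channel count so it can gate the input.
        self.project = nn.Conv2d(hidden_channels, in_channels, kernel_size=1)
        # Two receptive fields: fine local detail and broader context.
        self.fine = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)
        self.coarse = nn.Conv2d(in_channels, in_channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(2 * in_channels, in_channels, kernel_size=1)

    def forward(self, x, h_prev):
        # The previous hidden state gates the current input, so the two
        # streams interact before entering the recurrent update.
        gated = x * torch.sigmoid(self.project(h_prev))
        details = torch.cat([self.fine(gated), self.coarse(gated)], dim=1)
        return x + self.fuse(details)  # residual refinement of the input


class MoDeRNNCell(nn.Module):
    """Standard ConvLSTM cell whose input is first refined by the block above."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.dcb = DetailContextBlock(in_channels, hidden_channels)
        # One convolution produces all four ConvLSTM gates at once.
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size, padding=padding)

    def forward(self, x, state):
        h_prev, c_prev = state
        x = self.dcb(x, h_prev)  # context-aware refinement of the current frame
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h_prev], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


# Quick smoke test on dummy 64x64 grayscale frames.
cell = MoDeRNNCell(in_channels=1, hidden_channels=32)
x = torch.randn(2, 1, 64, 64)
h = c = torch.zeros(2, 32, 64, 64)
h, (h, c) = cell(x, (h, c))
print(h.shape)  # torch.Size([2, 32, 64, 64])
```

In a full predictive model, cells like this would be stacked and unrolled over the observed frames, with the final hidden states decoded into future frames; those details are left out of this sketch.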
