TinyPredNet: A Lightweight Framework for Satellite Image Sequence Prediction

Satellite image sequence prediction aims to infer future satellite image frames from historical observations, a significant and challenging dense prediction task. Although existing deep learning models deliver promising performance on this task, they suffer from expensive training costs, especially in training time and GPU memory demand, due to inefficient modeling of temporal variations. This issue severely limits lightweight applications on satellites, such as space-borne forecast models. In this article, we propose TinyPredNet, a lightweight framework for satellite image sequence prediction, in which a spatial encoder and decoder model intra-frame appearance features and a temporal translator captures inter-frame motion patterns. To efficiently model the temporal evolution of satellite image sequences, we carefully design a multi-scale temporal-cascaded structure and a channel attention-gated structure in the temporal translator. Comprehensive experiments on the FengYun-4A (FY-4A) satellite dataset show that the proposed framework achieves very competitive performance at a much lower computational cost than state-of-the-art methods. In addition, we conduct interpretability experiments to show how the designed structures work. We believe the proposed method can serve as a solid lightweight baseline for satellite image sequence prediction.
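
To make the encoder-translator-decoder pipeline concrete, below is a minimal PyTorch sketch of a TinyPredNet-style predictor. The paper does not publish this code; the layer sizes, the `ChannelAttentionGate`, and the dilated multi-scale temporal cascade are illustrative assumptions standing in for the authors' channel attention-gated and temporal-cascaded structures, not their implementation.

```python
# Hypothetical sketch only: layer widths, block counts, and the attention gate
# are assumptions, not the published TinyPredNet architecture.
import torch
import torch.nn as nn


class ChannelAttentionGate(nn.Module):
    """Squeeze-and-excitation style gate: reweights channels by global context."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class TemporalBlock(nn.Module):
    """One stage of the temporal translator: a dilated conv over stacked
    frame features followed by a channel attention gate (residual)."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.norm = nn.GroupNorm(8, channels)
        self.act = nn.GELU()
        self.gate = ChannelAttentionGate(channels)

    def forward(self, x):
        return x + self.gate(self.act(self.norm(self.conv(x))))


class TinyPredNetSketch(nn.Module):
    """Spatial encoder -> temporal translator (multi-scale cascade) -> spatial decoder.

    Input:  (B, T_in, C, H, W) past frames
    Output: (B, T_out, C, H, W) predicted frames
    """
    def __init__(self, t_in=10, t_out=10, img_channels=1, hidden=64):
        super().__init__()
        self.t_out = t_out
        # Spatial encoder: per-frame appearance features (stride-2 downsampling).
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, hidden, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.GELU(),
        )
        # Temporal translator: frames are stacked along channels and passed
        # through a cascade of blocks with increasing dilation (multi-scale).
        mixed = hidden * t_in
        self.translator = nn.Sequential(
            nn.Conv2d(mixed, mixed, 1),
            TemporalBlock(mixed, dilation=1),
            TemporalBlock(mixed, dilation=2),
            TemporalBlock(mixed, dilation=4),
            nn.Conv2d(mixed, hidden * t_out, 1),
        )
        # Spatial decoder: upsample back to image resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(hidden, img_channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        b, t, c, h, w = x.shape
        feats = self.encoder(x.reshape(b * t, c, h, w))      # (B*T_in, hid, h', w')
        _, ch, hh, ww = feats.shape
        feats = feats.reshape(b, t * ch, hh, ww)             # stack frames on channels
        future = self.translator(feats)                      # (B, T_out*hid, h', w')
        future = future.reshape(b * self.t_out, ch, hh, ww)
        out = self.decoder(future)                           # (B*T_out, C, H, W)
        return out.reshape(b, self.t_out, c, h, w)


if __name__ == "__main__":
    model = TinyPredNetSketch(t_in=10, t_out=10, img_channels=1, hidden=64)
    frames = torch.randn(2, 10, 1, 64, 64)                   # toy single-band batch
    print(model(frames).shape)                               # torch.Size([2, 10, 1, 64, 64])
```

Because the translator is purely convolutional and processes all input frames in one pass (rather than recurrently), this kind of design keeps training time and GPU memory low, which is the efficiency argument the abstract makes.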
