Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications

21 Mar 2024  ·  Francisco Mena, Diego Arenas, Marcela Charfuelan, Marlon Nuske, Andreas Dengel

Earth observation (EO) applications involving complex and heterogeneous data sources are commonly approached with machine learning models. However, these models typically assume that all data sources are persistently available. Various situations can affect the availability of EO sources, such as noise, clouds, or satellite mission failures. In this work, we assess the impact of missing temporal and static EO sources on trained models across four datasets with classification and regression tasks. We compare the predictive quality of different methods and find that some are naturally more robust to missing data. The Ensemble strategy, in particular, achieves prediction robustness of up to 100%. We show that missing-data scenarios are significantly more challenging in regression tasks than in classification tasks. Finally, we find that the optical view is the most critical view when it is missing individually.
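As a rough illustration of the evaluation protocol described above (dropping one EO source at prediction time and measuring how much predictive quality is retained relative to the full-data baseline), the sketch below simulates a missing view by blanking its input. The view names, the dict-based model interface, and the zero-imputation choice are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed interface, not the authors' code): simulate a missing
# EO source at inference time and quantify the resulting drop in accuracy.
import numpy as np
from sklearn.metrics import accuracy_score


def evaluate_with_missing_view(model, views, y_true, missing_view=None):
    """Predict with all views, optionally replacing one view by a placeholder."""
    inputs = {name: x.copy() for name, x in views.items()}
    if missing_view is not None:
        # Simulate an unavailable source (e.g., clouded optical imagery) with zeros.
        inputs[missing_view] = np.zeros_like(inputs[missing_view])
    y_pred = model.predict(inputs)
    return accuracy_score(y_true, y_pred)


# Hypothetical usage with assumed view names:
# views = {"optical": X_opt, "radar": X_sar, "weather": X_weather, "static": X_dem}
# full_acc    = evaluate_with_missing_view(model, views, y_test)
# missing_acc = evaluate_with_missing_view(model, views, y_test, missing_view="optical")
# robustness  = missing_acc / full_acc  # 1.0 means predictions are unaffected by the missing view
```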


Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Crop Classification | CropHarvest - Global | Feature Gated Fusion | Average Accuracy | 0.849 | # 1 |
| Crop Classification | CropHarvest - Global | Input Fusion | Average Accuracy | 0.847 | # 2 |
| Crop Classification | CropHarvest - Global | Ensemble strategy | Average Accuracy | 0.828 | # 3 |
| Crop Classification | CropHarvest multicrop - Global | Input Fusion | Average Accuracy | 0.738 | # 1 |
| Crop Classification | CropHarvest multicrop - Global | Feature Gated Fusion | Average Accuracy | 0.734 | # 2 |
| Crop Classification | CropHarvest multicrop - Global | Ensemble strategy | Average Accuracy | 0.715 | # 3 |
