A Prior Instruction Representation Framework for Remote Sensing Image-text Retrieval

ACMMM 2023  ·  Jiancheng Pan, Qing Ma, Cong Bai

This paper presents a Prior Instruction Representation (PIR) framework for remote sensing image-text retrieval, targeting remote sensing vision-language understanding and, in particular, the semantic noise problem. Its central contribution is a paradigm that draws on prior knowledge to instruct adaptive learning of vision and text representations. Concretely, two progressive attention encoder (PAE) structures, Spatial-PAE and Temporal-PAE, perform long-range dependency modeling to enhance key feature representation. For vision, Vision Instruction Representation (VIR), built on Spatial-PAE, exploits prior knowledge from remote sensing scene recognition by constructing a belief matrix that selects key features and reduces the impact of semantic noise. For text, Language Cycle Attention (LCA), built on Temporal-PAE, uses the previous time step to cyclically activate the current time step and strengthen the text representation. A cluster-wise affiliation loss constrains inter-class relations and shrinks semantic confusion zones in the common subspace. Comprehensive experiments demonstrate that prior-knowledge instruction enhances both vision and text representations, and that PIR outperforms state-of-the-art methods on two benchmark datasets, RSICD and RSITMD.
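To make the prior-instruction idea concrete, below is a minimal PyTorch-style sketch of how a belief matrix derived from a scene-recognition prior could gate visual tokens before cross-attention. All names (PriorInstructedSelection, scene_proto), shapes, and the gating rule are illustrative assumptions, not the authors' released implementation of Spatial-PAE/VIR.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorInstructedSelection(nn.Module):
    """Illustrative stand-in for VIR: a belief matrix over scene
    prototypes down-weights noisy patch tokens, then cross-attention
    refines the original tokens against the prior-weighted ones."""

    def __init__(self, dim: int, num_scenes: int, num_heads: int = 8):
        super().__init__()
        # Learnable prototypes standing in for a pretrained scene classifier.
        self.scene_proto = nn.Parameter(torch.randn(num_scenes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch features from the vision backbone.
        belief = F.softmax(tokens @ self.scene_proto.t(), dim=-1)  # (B, N, S)
        # A token whose belief is spread across many scene classes is
        # treated as semantic noise; its peak belief acts as a soft gate.
        gate = belief.max(dim=-1).values.unsqueeze(-1)             # (B, N, 1)
        guided = tokens * gate
        # Original tokens attend to the prior-weighted tokens.
        out, _ = self.attn(query=tokens, key=guided, value=guided)
        return self.norm(tokens + out)

# Example: refine 49 patch tokens of width 512 under 30 scene classes.
vir = PriorInstructedSelection(dim=512, num_scenes=30)
refined = vir(torch.randn(4, 49, 512))  # -> (4, 49, 512)

The soft gate keeps the module differentiable end to end; a hard top-k selection over the belief matrix would be an equally plausible reading of "selecting key features."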

Datasets

RSICD, RSITMD

Results from the Paper


Task                   Dataset  Model  Metric Name        Metric Value  Global Rank
Cross-Modal Retrieval  RSICD    PIR    Mean Recall        24.46%        # 4
Cross-Modal Retrieval  RSICD    PIR    Image-to-text R@1   9.88%        # 4
Cross-Modal Retrieval  RSICD    PIR    Text-to-image R@1   6.97%        # 4
Cross-Modal Retrieval  RSITMD   PIR    Mean Recall        38.24%        # 4
Cross-Modal Retrieval  RSITMD   PIR    Image-to-text R@1  18.14%        # 4
Cross-Modal Retrieval  RSITMD   PIR    Text-to-image R@1  12.17%        # 5

Methods

PIR  ·  Spatial-PAE  ·  Temporal-PAE  ·  Vision Instruction Representation (VIR)  ·  Language Cycle Attention (LCA)  ·  Cluster-wise Affiliation Loss (a hedged sketch of this loss follows below)
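The sketch below shows one plausible reading of the cluster-wise affiliation loss under stated assumptions: embeddings are pulled toward their own cluster centroid while centroids of different clusters are pushed past a cosine margin, which shrinks the confusion zones between classes in the common subspace. The function name, margin form, and centroid construction are assumptions, not the paper's exact objective.

import torch
import torch.nn.functional as F

def cluster_affiliation_loss(embeds: torch.Tensor,
                             labels: torch.Tensor,
                             margin: float = 0.2) -> torch.Tensor:
    """Hypothetical cluster-wise loss in the shared subspace.
    embeds: (B, D) image or text embeddings; labels: (B,) cluster ids.
    Assumes the batch contains at least two distinct clusters."""
    embeds = F.normalize(embeds, dim=-1)
    classes = labels.unique()
    centroids = F.normalize(
        torch.stack([embeds[labels == c].mean(dim=0) for c in classes]),
        dim=-1)
    # Intra-cluster pull: move members toward their own centroid.
    pull = torch.stack([
        (1.0 - embeds[labels == c] @ centroids[i]).mean()
        for i, c in enumerate(classes)]).mean()
    # Inter-cluster push: penalize centroid pairs whose cosine
    # similarity exceeds 1 - margin (i.e., clusters sitting too close).
    sim = centroids @ centroids.t()
    mask = ~torch.eye(len(classes), dtype=torch.bool, device=sim.device)
    push = F.relu(sim[mask] - (1.0 - margin)).mean()
    return pull + push

# Example: 8 embeddings in 4 clusters of 2.
loss = cluster_affiliation_loss(torch.randn(8, 512),
                                torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]))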