Calibrating Cross-modal Features for Text-Based Person Searching

5 Apr 2023 · Donglai Wei, Sipeng Zhang, Tong Yang, Yang Liu, Jing Liu

Text-Based Person Searching (TBPS) aims to identify images of pedestrian targets in a large-scale gallery given a textual caption. For the cross-modal TBPS task, it is critical to obtain well-distributed representations in the common embedding space to reduce the inter-modal gap. It is also essential to learn detailed image-text correspondences efficiently, so that similar targets can be discriminated and fine-grained search enabled. To address these challenges, we present a simple yet effective method that calibrates cross-modal features from these two perspectives. Our method introduces two novel losses that provide fine-grained cross-modal features. The Sew calibration loss takes the quality of textual captions as guidance and aligns features between the image and text modalities. The Masking Caption Modeling (MCM) loss leverages a masked-caption prediction task to establish detailed and generic relationships between textual and visual parts. The proposed method is cost-effective and can easily retrieve specific persons from textual captions. The architecture is a simple dual encoder without multi-level branches or extra interaction modules, enabling high-speed inference. Our method achieves top results on three popular benchmarks, with 73.81%, 74.25%, and 57.35% Rank-1 accuracy on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively. We hope our scalable method will serve as a solid baseline and help ease future research in TBPS. The code will be publicly available.
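
The abstract does not give the exact formulations of the two losses, so the sketch below is only an illustration of the overall design it describes: a dual encoder producing image and text embeddings, a contrastive alignment loss whose per-sample weight is a caption-quality score (standing in for the Sew calibration loss), and a BERT-style masked-token prediction head conditioned on image features (standing in for MCM). The stand-in backbones, the quality score, and the mcm_decoder module are assumptions for illustration, not the paper's actual components.

# Minimal sketch of a dual-encoder TBPS model with the two training objectives
# described above. All loss forms and modules here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoderTBPS(nn.Module):
    def __init__(self, embed_dim=256, vocab_size=30522, hidden=768):
        super().__init__()
        # Stand-ins for the real image/text backbones (e.g. a ViT and a BERT).
        self.image_encoder = nn.Linear(2048, hidden)
        self.text_encoder = nn.Embedding(vocab_size, hidden)
        self.image_proj = nn.Linear(hidden, embed_dim)
        self.text_proj = nn.Linear(hidden, embed_dim)
        # Lightweight cross-attention decoder used only for the MCM objective
        # at training time; it is discarded for retrieval/inference.
        self.mcm_decoder = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.mcm_head = nn.Linear(hidden, vocab_size)

    def encode(self, image_feats, token_ids):
        # Project each modality into the shared embedding space.
        img = F.normalize(self.image_proj(self.image_encoder(image_feats)), dim=-1)
        txt_tokens = self.text_encoder(token_ids)                     # (B, L, hidden)
        txt = F.normalize(self.text_proj(txt_tokens[:, 0]), dim=-1)   # [CLS]-style pooling
        return img, txt, txt_tokens

    def sew_calibration_loss(self, img, txt, quality, temperature=0.07):
        # Assumed form: symmetric InfoNCE alignment whose per-sample
        # contribution is weighted by a caption-quality score in [0, 1].
        logits = img @ txt.t() / temperature
        targets = torch.arange(img.size(0), device=img.device)
        loss_i2t = F.cross_entropy(logits, targets, reduction="none")
        loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")
        return (quality * (loss_i2t + loss_t2i) / 2).mean()

    def mcm_loss(self, txt_tokens, image_feats, labels, mask_positions):
        # Assumed form: predict masked caption tokens by attending from the
        # text tokens to image features, BERT-MLM style. In practice the text
        # encoder would see the masked caption; omitted here for brevity.
        img_ctx = self.image_encoder(image_feats).unsqueeze(1)        # (B, 1, hidden)
        fused, _ = self.mcm_decoder(txt_tokens, img_ctx, img_ctx)
        logits = self.mcm_head(fused)                                 # (B, L, vocab)
        return F.cross_entropy(logits[mask_positions], labels[mask_positions])

A toy forward pass under the same assumptions (pooled 2048-d visual features, 32-token captions, one masked position per caption):

model = DualEncoderTBPS()
image_feats = torch.randn(4, 2048)
token_ids = torch.randint(0, 30522, (4, 32))
quality = torch.rand(4)                                   # assumed caption-quality scores
mask_positions = torch.zeros(4, 32, dtype=torch.bool)
mask_positions[:, 5] = True                               # pretend token 5 is masked
img, txt, txt_tokens = model.encode(image_feats, token_ids)
loss = (model.sew_calibration_loss(img, txt, quality)
        + model.mcm_loss(txt_tokens, image_feats, token_ids, mask_positions))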
