Pose Invariant Person Re-Identification using Robust Pose-transformation GAN

11 Apr 2021  ·  Arnab Karmakar, Deepak Mishra ·

The objective of person re-identification (re-ID) is to retrieve a person's images from an image gallery, given a single instance of the person of interest. Despite several advancements, learning discriminative identity-sensitive and viewpoint-invariant features for robust person re-ID remains a major challenge owing to the large pose variation of humans. This paper proposes a re-ID pipeline that combines the image generation capability of Generative Adversarial Networks with pose clustering and feature fusion to achieve pose-invariant feature learning. The idea is to model a given person under different viewpoints and large pose changes, and to extract the most discriminative features from all of these appearances. A pose-transformational GAN (pt-GAN) module is trained to generate a person's image in any given pose, and a Pose Clustering module is proposed to identify the most significant poses for discriminative feature extraction. The given instance of the person is modelled in these varying poses, and the resulting features are combined through a Feature Fusion Network. The final re-ID model, consisting of these three sub-blocks, alleviates the pose dependence in person re-ID. The proposed model is also robust to occlusion, scale, rotation and illumination changes, providing a framework for viewpoint-invariant feature learning. It outperforms state-of-the-art GAN-based models on four benchmark datasets, and it surpasses state-of-the-art models that report higher re-ID accuracy in terms of improvement over their respective baselines.
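The abstract describes a three-stage inference pipeline: render the query person in a set of canonical poses with the pt-GAN generator, embed each rendering, and fuse the embeddings into one pose-invariant descriptor. The sketch below illustrates that data flow only; the module architectures, the attention-style fusion, and the hyper-parameters (`K = 8` canonical poses, `D = 256` feature dimensions, 64×64 inputs, a 36-dimensional pose vector) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Assumed hyper-parameters (not from the paper): K canonical poses from the
# Pose Clustering module, D-dimensional per-view embeddings.
K, D, POSE_DIM = 8, 256, 36


class PtGANGenerator(nn.Module):
    """Minimal stand-in for the pt-GAN generator: conditions a source image
    on a target pose vector and synthesises the person in that pose."""
    def __init__(self):
        super().__init__()
        self.pose_proj = nn.Linear(POSE_DIM, 64 * 64)  # pose vector -> spatial map
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, pose):               # img: (B,3,64,64), pose: (B,36)
        pmap = self.pose_proj(pose).view(-1, 1, 64, 64)
        return self.net(torch.cat([img, pmap], dim=1))


class FusionHead(nn.Module):
    """Toy Feature Fusion Network: attention-weighted sum of per-pose
    embeddings into one pose-invariant descriptor (illustrative choice)."""
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(D, 1)

    def forward(self, feats):                   # feats: (B, K, D)
        w = torch.softmax(self.score(feats), dim=1)
        return (w * feats).sum(dim=1)            # (B, D)


def reid_descriptor(img, poses, gen, backbone, fuse):
    """Render the query in every canonical pose, embed each rendering,
    and fuse the embeddings into a single re-ID feature."""
    views = [backbone(gen(img, p.expand(img.size(0), -1))) for p in poses]
    return fuse(torch.stack(views, dim=1))


# Usage with placeholder modules:
gen = PtGANGenerator()
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, D))  # stand-in CNN
fuse = FusionHead()
poses = [torch.randn(1, POSE_DIM) for _ in range(K)]  # dummy cluster centres
feat = reid_descriptor(torch.randn(2, 3, 64, 64), poses, gen, backbone, fuse)
print(feat.shape)  # torch.Size([2, 256])
```

In the paper, the canonical poses would come from clustering pose estimates over the training set and the backbone would be a proper re-ID CNN; both are replaced by dummies here purely to keep the sketch self-contained.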
