A high-precision self-supervised monocular visual odometry in foggy weather based on robust cycled generative adversarial networks and multi-task learning aided depth estimation

9 Mar 2022  ·  Xiuyuan Li, Jiangang Yu, Fengchao Li, Guowen An

This paper proposes a high-precision self-supervised monocular VO method designed specifically for navigation in foggy weather. A cycled generative adversarial network is designed to obtain a high-quality self-supervised loss by forcing the forward and backward half-cycles to output consistent estimates. Moreover, a gradient-based loss and a perceptual loss are introduced to eliminate the interference of the complex photometric changes of foggy weather on the self-supervised loss. To address the ill-posed nature of depth estimation, a self-supervised multi-task-learning-aided depth estimation module is designed based on the strong correlation between depth estimation and the transmission map of hazy images in foggy weather. Experimental results on the synthetic foggy KITTI dataset show that the proposed self-supervised monocular VO outperforms other state-of-the-art monocular VO methods in the literature in both depth and pose estimation, indicating that the designed method is better suited to foggy weather.
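
A minimal sketch, assuming a PyTorch implementation, of the kinds of loss terms the abstract describes: a cycle-consistency term that forces the forward and backward half-cycle estimates to agree, an image-gradient term that is less sensitive to fog-induced photometric changes than raw intensities, and the transmission/depth relation t(x) = exp(-beta * d(x)) from the standard atmospheric scattering model, which is the correlation the multi-task module exploits. The function names, loss weights, and scattering coefficient below are illustrative assumptions rather than the authors' code, and the perceptual term (usually computed on pretrained CNN features) is omitted for brevity.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn.functional as F


def image_gradients(img):
    """Finite-difference gradients along x and y for a (B, C, H, W) tensor."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy


def gradient_loss(pred, target):
    """Compare image gradients rather than raw intensities, which reduces the
    effect of the global photometric changes caused by fog."""
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    return (pdx - tdx).abs().mean() + (pdy - tdy).abs().mean()


def cycle_consistency_loss(forward_est, backward_est):
    """Force the forward and backward half-cycles to output consistent
    estimates (e.g. reconstructed frames or depth maps)."""
    return F.l1_loss(forward_est, backward_est)


def transmission_from_depth(depth, beta=0.01):
    """Atmospheric scattering model: t(x) = exp(-beta * d(x)).
    `beta` (the scattering coefficient) is an assumed value."""
    return torch.exp(-beta * depth)


def self_supervised_loss(forward_est, backward_est, pred, target,
                         w_cycle=1.0, w_grad=0.5):
    """Weighted combination of the terms; the weights are assumptions."""
    return (w_cycle * cycle_consistency_loss(forward_est, backward_est)
            + w_grad * gradient_loss(pred, target))
```

In training, a relation like transmission_from_depth would let a transmission-map prediction and the depth network supervise each other, which is the multi-task coupling between dehazing and depth estimation that the abstract refers to.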
