Fast and Robust Matching for Multimodal Remote Sensing Image Registration

19 Aug 2018  ·  Yuanxin Ye, Lorenzo Bruzzone, Jie Shan, Francesca Bovolo, Qing Zhu

While image registration has been studied in the remote sensing community for decades, registering multimodal data (e.g., optical, LiDAR, SAR, and map) remains a challenging problem because of the significant nonlinear intensity differences between such data. To address this problem, this paper presents a fast and robust matching framework that integrates local descriptors for multimodal registration. In the proposed framework, a local descriptor, such as the Histogram of Oriented Gradients (HOG), Local Self-Similarity (LSS), or Speeded-Up Robust Features (SURF), is first extracted at each pixel to form a pixel-wise feature representation of an image. We then define a similarity measure based on this feature representation in the frequency domain using the 3-dimensional fast Fourier transform (3D FFT), followed by a template matching scheme to detect control points between images. In this procedure, we also propose a novel pixel-wise feature representation built from the orientated gradients of images, named channel features of orientated gradients (CFOG). This novel feature is an extension of the pixel-wise HOG descriptor and outperforms it in both matching performance and computational efficiency. The major advantages of the proposed framework are: (1) a structural similarity representation based on the pixel-wise feature description and (2) high computational efficiency due to the use of the 3D FFT. Experimental results on different types of multimodal images show the superior matching performance of the proposed framework compared with state-of-the-art methods. The proposed matching framework has been used in the software products of a Chinese listed company. The MATLAB code is available with this manuscript.
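To make the pipeline concrete, here is a minimal Python sketch of the two core steps the abstract describes: building a CFOG-like pixel-wise descriptor from orientated gradients, and matching a template against a search window in the frequency domain. This is not the authors' released MATLAB implementation; the bin count, smoothing parameters, normalization, and the per-channel 2-D FFT correlation (a simplified stand-in for the paper's 3-D FFT similarity measure) are all illustrative assumptions.

```python
# Sketch of CFOG-style feature extraction and FFT-based template matching.
# All parameter choices here are assumptions, not the paper's exact settings.
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d

def cfog(image, n_bins=9, sigma=0.8):
    """Build a CFOG-like feature volume of shape (H, W, n_bins)."""
    img = image.astype(np.float64)
    gx = np.gradient(img, axis=1)
    gy = np.gradient(img, axis=0)
    mag = np.hypot(gx, gy)
    # Fold orientations into [0, pi) so the descriptor ignores gradient sign.
    ori = np.mod(np.arctan2(gy, gx), np.pi)
    idx = np.minimum((ori / (np.pi / n_bins)).astype(int), n_bins - 1)
    feat = np.zeros(img.shape + (n_bins,))
    for b in range(n_bins):
        # Scatter gradient magnitudes into their orientation channel,
        # then smooth each channel spatially (the x-y direction).
        feat[..., b] = gaussian_filter(np.where(idx == b, mag, 0.0), sigma)
    # Smooth along the orientation (z) axis with a small [1, 2, 1] kernel.
    feat = convolve1d(feat, np.array([1.0, 2.0, 1.0]), axis=2, mode="wrap")
    # Pixel-wise L2 normalization of the descriptor.
    norm = np.sqrt((feat ** 2).sum(axis=2, keepdims=True)) + 1e-12
    return feat / norm

def match_template_fft(search_feat, template_feat):
    """Correlate two feature volumes channel by channel via FFT and
    return the (row, col) offset with the highest summed response."""
    H, W, C = search_feat.shape
    score = np.zeros((H, W))
    for c in range(C):
        F = np.fft.fft2(search_feat[..., c])
        # Zero-pad the template channel to the search-window size.
        G = np.fft.fft2(template_feat[..., c], s=(H, W))
        score += np.real(np.fft.ifft2(F * np.conj(G)))
    return np.unravel_index(np.argmax(score), score.shape)
```

A hypothetical use would be `match_template_fft(cfog(sar_window), cfog(optical_patch))` to locate an optical patch inside a SAR search window; the correlation peak gives the control-point offset. Note the design shortcut: the paper formulates the similarity measure with a single 3-D FFT over the feature volume, whereas the sketch sums per-channel 2-D correlations, which is a simpler but closely related frequency-domain computation.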
