Semantic Segmentation of Panoramic Images Using a Synthetic Dataset

2 Sep 2019 · Yuanyou Xu, Kaiwei Wang, Kailun Yang, Dongming Sun, Jia Fu

Panoramic images have advantages in information capacity and scene stability due to their large field of view (FoV). In this paper, we propose a method to synthesize a new panoramic image dataset. We stitch images taken from different directions, together with their label images, into panoramic images, yielding a panoramic semantic segmentation dataset named SYNTHIA-PANO. To investigate the effect of using panoramic images as training data, we designed and performed a comprehensive set of experiments. Experimental results show that using panoramic images as training data is beneficial to segmentation performance. In addition, a model trained with panoramic images covering a 180-degree FoV performs better than its counterparts. Furthermore, the model trained with panoramic images is also more robust to image distortion.
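The page does not include code, but the stitching step can be illustrated with a minimal sketch: directional pinhole views and their label maps are warped onto a shared cylinder and concatenated side by side. This assumes the views share an optical centre, have known horizontal FoV, and abut exactly; the function names cylindrical_warp and stitch_panorama, the 90-degree default FoV, and the use of OpenCV's cv2.remap are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def cylindrical_warp(img, fov_deg, interp):
    """Remap a pinhole image onto a cylinder sharing its optical centre.

    Assumes square pixels, the principal point at the image centre, and a
    known horizontal FoV (the focal length and output width follow from it).
    """
    h, w = img.shape[:2]
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length in pixels
    cx, cy = w / 2.0, h / 2.0

    # Output grid: one column per unit of arc length (f * theta), same height.
    out_w = int(round(f * np.radians(fov_deg)))
    xs, ys = np.meshgrid(np.arange(out_w), np.arange(h))
    theta = (xs - out_w / 2.0) / f            # azimuth of each output column
    y_cyl = (ys - cy) / f                     # height on the unit cylinder

    # Point (sin t, y, cos t) on the cylinder -> pixel in the pinhole image.
    map_x = (f * np.tan(theta) + cx).astype(np.float32)
    map_y = (f * y_cyl / np.cos(theta) + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interp, borderMode=cv2.BORDER_CONSTANT)

def stitch_panorama(views, labels, fov_deg=90.0):
    """Concatenate cylindrically warped views and their label maps.

    `views` and `labels` are lists ordered by camera heading; adjacent views
    are assumed to abut exactly (no overlap), as with ideally aligned cameras.
    """
    rgb = [cylindrical_warp(v, fov_deg, cv2.INTER_LINEAR) for v in views]
    # Nearest-neighbour interpolation keeps the integer class ids intact.
    seg = [cylindrical_warp(l, fov_deg, cv2.INTER_NEAREST) for l in labels]
    return np.hstack(rgb), np.hstack(seg)
```

With four 90-degree views (front, right, back, left) this yields a full 360-degree panorama and its aligned label map; a 180-degree panorama, as studied in the paper, would use only two adjacent views.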


Datasets


Introduced in the Paper:

SYNTHIA-PANO

Used in the Paper:

Cityscapes
SYNTHIA

