A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird's Eye View

8 May 2020  ·  Lennart Reiher, Bastian Lampe, Lutz Eckstein ·

Accurate environment perception is essential for automated driving. When using monocular cameras, the distance estimation of elements in the environment poses a major challenge. Distances can be more easily estimated when the camera perspective is transformed to a bird's eye view (BEV). For flat surfaces, Inverse Perspective Mapping (IPM) can accurately transform images to a BEV. Three-dimensional objects such as vehicles and vulnerable road users are distorted by this transformation, making it difficult to estimate their position relative to the sensor. This paper describes a methodology to obtain a corrected 360° BEV image given images from multiple vehicle-mounted cameras. The corrected BEV image is segmented into semantic classes and includes a prediction of occluded areas. The neural network approach does not rely on manually labeled data, but is trained on a synthetic dataset in such a way that it generalizes well to real-world data. By using semantically segmented images as input, we reduce the reality gap between simulated and real-world data and are able to show that our method can be successfully applied in the real world. Extensive experiments conducted on the synthetic data demonstrate the superiority of our approach compared to IPM. Source code and datasets are available at https://github.com/ika-rwth-aachen/Cam2BEV
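The IPM baseline mentioned in the abstract warps an image to a BEV by assuming all pixels lie on the ground plane. The following is a minimal sketch of that idea using a ground-plane homography with OpenCV; the calibration values, file name, and BEV layout (origin at the image center, fixed pixels-per-meter scale) are placeholders for illustration, not values from the paper or its repository.

```python
# Sketch of Inverse Perspective Mapping (IPM) via a ground-plane homography.
# All calibration values below are illustrative placeholders.
import numpy as np
import cv2

def ipm_homography(K, R, t, px_per_m=20.0, bev_size=(512, 512)):
    """Homography mapping BEV pixels to image pixels through the ground plane z=0."""
    # A ground point (X, Y, 0, 1) projects to the image as K [r1 r2 t] (X, Y, 1)^T.
    H_ground_to_img = K @ np.column_stack((R[:, 0], R[:, 1], t))
    # Map a BEV pixel (u, v) to metric ground coordinates (X, Y):
    # X points "up" in the BEV image, Y points "left", origin at the image center.
    w, h = bev_size
    s = px_per_m
    M_bev_to_ground = np.array([
        [0.0, -1.0 / s, h / (2.0 * s)],
        [-1.0 / s, 0.0, w / (2.0 * s)],
        [0.0, 0.0, 1.0]])
    return H_ground_to_img @ M_bev_to_ground

# Example usage with placeholder calibration:
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])          # intrinsics (placeholder)
R = np.eye(3)                            # camera orientation (placeholder)
t = np.array([0.0, 0.0, 2.0])            # camera 2 m above ground (placeholder)

img = cv2.imread("front_camera.png")     # hypothetical input frame
H = ipm_homography(K, R, t)
# H maps destination (BEV) pixels to source (image) pixels, so use WARP_INVERSE_MAP.
bev = cv2.warpPerspective(img, H, (512, 512),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```

As the abstract notes, this mapping is only exact for points on the ground plane; anything with height gets smeared away from the camera, which is the distortion the learned approach corrects.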


Datasets

Introduced in the Paper: Cam2BEV

Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Semantic Segmentation | Cam2BEV | uNetXST | Mean IoU | 71.92 | #1
Cross-View Image-to-Image Translation | Cam2BEV | uNetXST | Mean IoU | 71.92 | #1
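The metric reported above is mean intersection-over-union across semantic classes. Below is a generic sketch of how that metric is typically computed; it is not the paper's evaluation code, and the small label maps are made-up examples.

```python
# Generic mean IoU over semantic classes (illustrative, not the paper's evaluation script).
import numpy as np

def mean_iou(pred, target, num_classes):
    """Average IoU over classes that appear in the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class absent in both maps; skip it
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Example with two 2x2 label maps and 3 classes:
pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
print(mean_iou(pred, target, num_classes=3))  # (1.0 + 0.5 + 0.5) / 3 ≈ 0.667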
