Monocular Semantic Occupancy Grid Mapping with Convolutional Variational Encoder-Decoder Networks

6 Apr 2018  ·  Chenyang Lu, Marinus Jacobus Gerardus van de Molengraft, Gijs Dubbelman

In this work, we investigate and evaluate end-to-end learning of monocular semantic-metric occupancy grid mapping from weak binocular ground truth. The network learns to predict four semantic classes, as well as the mapping from the camera's front view to a bird's-eye view. At its core, it uses a variational encoder-decoder network that encodes the front-view visual information of the driving scene and subsequently decodes it into a 2-D top-view Cartesian coordinate system. Evaluations on Cityscapes show that end-to-end learning of semantic-metric occupancy grids outperforms a deterministic mapping approach based on the flat-plane assumption by more than 12% in mean IoU. Furthermore, we show that variational sampling with a relatively small embedding vector brings robustness against perturbations from vehicle dynamics, as well as generalizability to unseen KITTI data. Our network achieves real-time inference rates of approximately 35 Hz for an input image of 256x512 pixels and an output map of 64x64 occupancy grid cells on a Titan V GPU.
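
To make the pipeline concrete, the following is a minimal PyTorch sketch of such a convolutional variational encoder-decoder: a convolutional encoder compresses the 256x512 front-view image into a small latent vector, variational sampling draws a code from the resulting Gaussian, and a transposed-convolution decoder expands that code into a 64x64 top-view grid of class logits. All layer sizes, the latent dimension, and the module names are illustrative assumptions and do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn

class VEDSketch(nn.Module):
    """Front-view RGB (3 x 256 x 512) -> top-view class logits (C x 64 x 64).
    Illustrative sketch; not the paper's exact architecture."""

    def __init__(self, num_classes=4, latent_dim=128):
        super().__init__()
        # Encoder: strided convolutions downsample the front view,
        # then adaptive pooling collapses it to a flat feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 x 256
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 x 128
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 x 64
            nn.Conv2d(128, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 x 32
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                                           # 128 * 4 * 4
        )
        self.fc_mu = nn.Linear(128 * 4 * 4, latent_dim)
        self.fc_logvar = nn.Linear(128 * 4 * 4, latent_dim)
        # Decoder: the latent code is reshaped and upsampled into the
        # top-view Cartesian grid via transposed convolutions.
        self.fc_dec = nn.Linear(latent_dim, 128 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 x 8
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 x 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32 x 32
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),      # 64 x 64
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Variational sampling (reparameterization trick): z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        g = self.fc_dec(z).view(-1, 128, 4, 4)
        return self.decoder(g), mu, logvar

logits, mu, logvar = VEDSketch()(torch.randn(1, 3, 256, 512))
print(logits.shape)  # torch.Size([1, 4, 64, 64])
```

Training such a model would typically combine a per-cell classification loss on the output grid with a KL-divergence term on (mu, logvar). The small latent bottleneck forces the decoder to rely on a compact scene code, which is the property the abstract credits for robustness against vehicle-dynamics perturbations.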

Results from the Paper


Ranked #2 on Bird's-Eye View Semantic Segmentation on nuScenes (IoU veh - 224x480 - No vis filter - 100x50 at 0.25 metric)

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Bird's-Eye View Semantic Segmentation | nuScenes | VED | IoU veh - 224x480 - No vis filter - 100x50 at 0.25 | 8.8 | #2
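
For context on the metric: per-class IoU counts the grid cells where prediction and ground truth agree on a class, divided by the cells where either assigns that class. A minimal sketch over integer label grids follows; the class IDs and shapes are illustrative assumptions, not the benchmark's exact evaluation protocol.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes=4):
    """pred, gt: integer class-label grids of identical shape, e.g. (64, 64)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()  # cells both call class c
        union = np.logical_or(pred == c, gt == c).sum()   # cells either calls class c
        ious.append(inter / union if union > 0 else float("nan"))
    return ious  # averaging the non-NaN entries gives mean IoU
```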

Methods


No methods listed for this paper.