Bird's-Eye View Semantic Segmentation
14 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning
In particular, we propose ST-P3, a spatial-temporal feature learning scheme that produces more representative features for the perception, prediction and planning tasks simultaneously.
Model-Based Imitation Learning for Urban Driving
Our approach is the first camera-only method that models the static scene, the dynamic scene, and ego-behaviour in an urban driving environment.
Semi-Supervised Learning for Visual Bird's Eye View Semantic Segmentation
In this paper, we present a novel semi-supervised framework for visual BEV semantic segmentation that boosts performance by exploiting unlabeled images during training.
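One common way to exploit unlabeled images in semi-supervised segmentation is confidence-thresholded pseudo-labelling; the minimal sketch below illustrates that idea only, and is not taken from the paper. The function name `pseudo_labels` and the threshold value are illustrative assumptions.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Derive per-pixel pseudo-labels from predicted class probabilities.

    probs: (C, H, W) softmax output of a model on an unlabeled image.
    Returns (labels, mask): hard argmax labels plus a boolean mask of
    confident pixels that would contribute to an unsupervised loss.
    """
    conf = probs.max(axis=0)        # (H, W) maximum class probability
    labels = probs.argmax(axis=0)   # (H, W) hard class labels
    mask = conf >= threshold        # keep only confident pixels
    return labels, mask

# Toy example: 2 classes on a 2x2 grid of pixels
probs = np.array([[[0.95, 0.60], [0.20, 0.50]],
                  [[0.05, 0.40], [0.80, 0.50]]])
labels, mask = pseudo_labels(probs, threshold=0.75)
```

Only pixels passing the confidence mask are treated as supervision on the unlabeled image; low-confidence pixels are simply ignored.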
PointBeV: A Sparse Approach to BeV Predictions
To address this, we propose PointBeV, a novel sparse BeV segmentation model operating on sparse BeV cells instead of dense grids.
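The contrast between sparse BeV cells and dense grids can be sketched as follows: evaluate features only at a chosen set of cell coordinates and leave the rest of the grid untouched. This is a generic illustration under assumed names (`sparse_bev_query`, `feature_fn`), not PointBeV's actual architecture.

```python
import numpy as np

def sparse_bev_query(feature_fn, coords, grid_shape, fill=0.0):
    """Evaluate BeV features only at a sparse set of cell coordinates.

    feature_fn: maps an (N, 2) array of (row, col) cells to (N,) values
    coords:     (N, 2) int array of cells of interest (e.g. near the ego car)
    Returns a dense (H, W) grid holding computed values at `coords`
    and `fill` elsewhere -- the remaining cells are never evaluated.
    """
    grid = np.full(grid_shape, fill)
    vals = feature_fn(coords)                  # compute only N cells
    grid[coords[:, 0], coords[:, 1]] = vals    # scatter into the grid
    return grid

# Toy feature: squared distance from the origin, queried at 3 of 16 cells
coords = np.array([[0, 0], [1, 2], [3, 3]])
fn = lambda c: (c ** 2).sum(axis=1).astype(float)
bev = sparse_bev_query(fn, coords, (4, 4))
```

The cost of the query scales with the number of requested cells rather than with the full grid resolution, which is the motivation for sparse BeV prediction.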