3D Semantic Occupancy Prediction
4 papers with code • 0 benchmarks • 0 datasets
This task uses sparse LiDAR semantic labels for training and evaluation.
Most implemented papers
OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction
Vision-based perception for autonomous driving has undergone a transformation from bird's-eye-view (BEV) representations to 3D semantic occupancy.
PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction
To address this, we propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively and a PointOcc model to process them efficiently.
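The cylindrical tri-perspective view can be illustrated with a minimal sketch: convert each point to cylindrical coordinates and scatter it onto three orthogonal planes. The grid sizes and coordinate ranges below are illustrative assumptions, not PointOcc's actual configuration.

```python
import numpy as np

def cylindrical_tpv_features(points, grid=(64, 64, 32)):
    """Project a point cloud onto three cylindrical tri-perspective planes.

    `points` is an (N, 3) array of Cartesian (x, y, z) coordinates.
    Returns binary occupancy maps for the (rho, theta), (rho, z), and
    (theta, z) planes. Ranges (rho < 50 m, z in [-3, 5] m) are assumed.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)          # radial distance
    theta = np.arctan2(y, x)                # azimuth in [-pi, pi]

    # Quantize each cylindrical coordinate to its grid axis.
    r_idx = np.clip((rho / 50.0 * grid[0]).astype(int), 0, grid[0] - 1)
    t_idx = np.clip(((theta + np.pi) / (2 * np.pi) * grid[1]).astype(int),
                    0, grid[1] - 1)
    z_idx = np.clip(((z + 3.0) / 8.0 * grid[2]).astype(int), 0, grid[2] - 1)

    rt = np.zeros((grid[0], grid[1])); rt[r_idx, t_idx] = 1.0  # (rho, theta)
    rz = np.zeros((grid[0], grid[2])); rz[r_idx, z_idx] = 1.0  # (rho, z)
    tz = np.zeros((grid[1], grid[2])); tz[t_idx, z_idx] = 1.0  # (theta, z)
    return rt, rz, tz
```

In the full model, 2D backbones process each plane and a voxel's feature is assembled by sampling and combining the three planes; the sketch only shows the coordinate transform that gives the cylindrical TPV its denser near-range resolution.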
InverseMatrixVT3D: An Efficient Projection Matrix-Based Approach for 3D Occupancy Prediction
Specifically, we achieve this by performing matrix multiplications between multi-view image feature maps and two sparse projection matrices.
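The core idea, lifting multi-view 2D features into a 3D volume with a precomputed sparse projection matrix, can be sketched as follows. All names, shapes, and weights here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def project_features(img_feats, rows, cols, vals, num_voxels):
    """Aggregate flattened multi-view image features into 3D volume features
    via a sparse projection matrix stored as (rows, cols, vals) triples:
    vol[rows[k]] += vals[k] * img_feats[cols[k]] for each nonzero entry k.
    This is equivalent to one sparse-dense matrix multiplication."""
    vol = np.zeros((num_voxels, img_feats.shape[1]))
    np.add.at(vol, rows, vals[:, None] * img_feats[cols])
    return vol

# Toy usage: 2 views of 4x4 feature maps, 8 channels, 10 voxels.
# Each voxel samples 2 pixels with weight 0.5 -- stand-ins for the
# precomputed geometric sampling weights such a matrix would hold.
rng = np.random.default_rng(0)
V, H, W, C, N = 2, 4, 4, 8, 10
img_feats = rng.random((V * H * W, C))
rows = np.repeat(np.arange(N), 2)
cols = rng.integers(0, V * H * W, size=2 * N)
vals = np.full(2 * N, 0.5)
vol_feats = project_features(img_feats, rows, cols, vals, N)  # shape (10, 8)
```

Because the projection matrix depends only on camera geometry, it can be built once and reused every frame, which is where the efficiency claim comes from.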
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
Low-cost, vision-centric 3D perception systems for autonomous driving have made significant progress in recent years, narrowing the gap to expensive LiDAR-based methods.