3D Semantic Segmentation
168 papers with code • 14 benchmarks • 31 datasets
3D Semantic Segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the different objects and parts within a 3D scene, enabling applications such as robotics, autonomous driving, and augmented reality.
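Concretely, most deep-learning approaches cast the task as per-point classification: a backbone extracts a feature for every point and a shared head predicts a semantic label per point. A minimal sketch of that framing in PyTorch (the tiny shared MLP and the class count are illustrative placeholders, not any particular published backbone):

```python
import torch
import torch.nn as nn

class PerPointSegmenter(nn.Module):
    """Toy per-point classifier: a shared MLP maps each point's
    (x, y, z) coordinates to per-class logits. Real systems use far
    richer backbones (PointNet++-style hierarchies, sparse voxel CNNs,
    point transformers), but the per-point output shape is the same."""
    def __init__(self, num_classes: int = 13):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> logits: (B, N, num_classes)
        return self.mlp(points)

model = PerPointSegmenter()
cloud = torch.rand(2, 4096, 3)         # two clouds of 4096 points each
labels = model(cloud).argmax(dim=-1)   # one semantic label per point
print(labels.shape)                    # torch.Size([2, 4096])
```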
Latest papers with no code
RESSCAL3D: Resolution Scalable 3D Semantic Segmentation of Point Clouds
To the best of our knowledge, the proposed method is the first resolution-scalable, deep-learning-based approach to 3D semantic segmentation of point clouds.
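As the abstract hints, the idea is to produce predictions for a coarse subset of the cloud immediately and then process only the points added at each finer resolution. A schematic NumPy sketch of that inference loop (the random resolution schedule and the dummy `segment_points` stand-in are assumptions for illustration, not the RESSCAL3D model):

```python
import numpy as np

def segment_points(points):
    # Dummy stand-in for a learned model: label points by height.
    return (points[:, 2] > points[:, 2].mean()).astype(int)

rng = np.random.default_rng(0)
cloud = rng.random((10_000, 3))
order = rng.permutation(len(cloud))    # stand-in for the order in
stages = np.array_split(order, 4)      # which resolutions arrive

labels = np.full(len(cloud), -1)
for i, stage in enumerate(stages):
    # Only the NEWLY added points are processed at each scale; a full
    # system would also condition on features cached from earlier scales.
    labels[stage] = segment_points(cloud[stage])
    done = (labels >= 0).mean()
    print(f"scale {i}: predictions available for {done:.0%} of points")
```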
Hierarchical Insights: Exploiting Structural Similarities for Reliable 3D Semantic Segmentation
Safety-critical applications like autonomous driving call for robust 3D environment perception algorithms which can withstand highly diverse and ambiguous surroundings.
TTT-KD: Test-Time Training for 3D Semantic Segmentation through Knowledge Distillation from Foundation Models
Given access to paired image-point cloud (2D-3D) data, we first optimize a 3D segmentation backbone for the main task of semantic segmentation on the point clouds and for the auxiliary task of 2D $\to$ 3D knowledge distillation (KD), using an off-the-shelf pre-trained 2D foundation model.
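One way to picture the distillation term: project each 3D point into the paired image, sample the frozen 2D foundation model's feature at that pixel, and pull the 3D backbone's per-point feature toward it. A hedged PyTorch sketch (the cosine objective and feature dimensions are illustrative assumptions; the pixel-feature sampling step is elided):

```python
import torch
import torch.nn.functional as F

def kd_loss(point_feats, pixel_feats):
    """point_feats: (N, D) features from the 3D backbone.
    pixel_feats:  (N, D) frozen 2D foundation-model features sampled at
    the pixels the points project to. Cosine distance is one common choice."""
    return (1 - F.cosine_similarity(point_feats, pixel_feats, dim=-1)).mean()

# Toy shapes: 2048 points, 256-dim features in both modalities.
point_feats = torch.randn(2048, 256, requires_grad=True)
pixel_feats = torch.randn(2048, 256)   # from the frozen 2D model
loss = kd_loss(point_feats, pixel_feats)
loss.backward()   # gradients flow only into the 3D backbone
```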
Real-time 3D semantic occupancy prediction for autonomous vehicles using memory-efficient sparse convolution
In autonomous vehicles, understanding the surrounding 3D environment of the ego vehicle in real-time is essential.
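Sparse convolution is memory-efficient precisely because LiDAR scenes occupy a tiny fraction of the voxel grid, so only occupied voxels are stored and convolved. A minimal NumPy sketch of the sparse voxelization step (the voxel size is an arbitrary choice here; production pipelines typically rely on libraries such as spconv or MinkowskiEngine):

```python
import numpy as np

def voxelize_sparse(points, voxel_size=0.2):
    """Quantize points to voxel indices and keep only occupied voxels.
    Returns unique integer voxel coordinates: the coordinate list a
    sparse convolution would operate on."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    occupied, inverse = np.unique(coords, axis=0, return_inverse=True)
    return occupied, inverse   # inverse maps each point to its voxel

rng = np.random.default_rng(0)
scan = rng.uniform(-50, 50, size=(120_000, 3))   # synthetic LiDAR scan
voxels, point_to_voxel = voxelize_sparse(scan)
dense_total = int(np.prod(np.ptp(voxels, axis=0) + 1))
print(f"{len(voxels)} occupied of ~{dense_total} grid cells")
```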
AVS-Net: Point Sampling with Adaptive Voxel Size for 3D Scene Understanding
To this end, this paper presents an advanced point sampler that achieves both high accuracy and efficiency.
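One plausible reading of "adaptive voxel size" is a sampler that searches for the voxel size whose grid-downsampled output meets a target point budget, instead of fixing the size a priori. A hedged NumPy sketch of that interpretation (the bisection search and tolerance are illustrative assumptions, not the AVS-Net algorithm):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one point per occupied voxel (the first encountered)."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    _, keep = np.unique(coords, axis=0, return_index=True)
    return points[np.sort(keep)]

def adaptive_voxel_sample(points, target, lo=1e-3, hi=10.0, iters=20):
    """Bisect the voxel size so the sample count lands near `target`.
    Larger voxels -> fewer samples, so the mapping is monotone."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        n = len(voxel_downsample(points, mid))
        if n > target:
            lo = mid   # too many samples: grow the voxels
        else:
            hi = mid   # too few: shrink them
    return voxel_downsample(points, 0.5 * (lo + hi))

cloud = np.random.default_rng(0).random((50_000, 3)) * 20
sample = adaptive_voxel_sample(cloud, target=4096)
print(len(sample))   # close to 4096
```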
Is Continual Learning Ready for Real-world Challenges?
Our paper aims to initiate a paradigm shift, advocating for the adoption of continual learning methods through new experimental protocols that better emulate real-world conditions to facilitate breakthroughs in the field.
SGS-SLAM: Semantic Gaussian Splatting For Neural Dense SLAM
We present SGS-SLAM, the first semantic visual SLAM system based on Gaussian Splatting.
Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration
First, we propose the learnable transformation alignment to bridge the domain gap between image and point cloud data, converting features into a unified representation space for effective comparison and matching.
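A common realization of such alignment is a pair of learnable projection heads that map image and point features into one shared space, trained with a contrastive matching objective over corresponding pixel-point pairs. A hedged PyTorch sketch (the projection dimensions and InfoNCE-style loss are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical projection heads mapping each modality into a shared space.
img_proj = nn.Linear(768, 256)   # image features -> unified space
pcd_proj = nn.Linear(96, 256)    # point features -> unified space

img_feat = torch.randn(1024, 768)   # one feature per sampled pixel
pcd_feat = torch.randn(1024, 96)    # one feature per matching point

z_img = F.normalize(img_proj(img_feat), dim=-1)
z_pcd = F.normalize(pcd_proj(pcd_feat), dim=-1)

# InfoNCE-style matching: corresponding pixel/point pairs lie on the
# diagonal of the similarity matrix.
logits = z_img @ z_pcd.T / 0.07
targets = torch.arange(len(logits))
loss = F.cross_entropy(logits, targets)
loss.backward()
```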
POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images
We describe an approach that predicts an open-vocabulary 3D semantic voxel occupancy map from input 2D images, with the objective of enabling 3D grounding, segmentation, and retrieval of free-form language queries.
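Open-vocabulary occupancy models typically align each predicted voxel feature with a text embedding space, so a free-form query reduces to cosine similarity between the query embedding and the voxel features. A schematic NumPy sketch with random placeholder embeddings (no real text encoder is invoked):

```python
import numpy as np

rng = np.random.default_rng(0)
voxel_feats = rng.standard_normal((5000, 512))   # language-aligned voxel features
query_emb = rng.standard_normal(512)             # embedding of a free-form query

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Cosine similarity between the query and every voxel.
scores = normalize(voxel_feats) @ normalize(query_emb)
topk = np.argsort(scores)[-50:][::-1]   # 50 best-matching voxels
print("best cosine score:", scores[topk[0]])
```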
WildScenes: A Benchmark for 2D and 3D Semantic Segmentation in Large-scale Natural Environments
Recent progress in semantic scene understanding has primarily been enabled by the availability of semantically annotated bi-modal (camera and lidar) datasets in urban environments.