3D Semantic Segmentation

169 papers with code • 14 benchmarks • 31 datasets

3D Semantic Segmentation is a computer vision task that divides a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the different objects and parts within a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
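Concretely, a model assigns one class label to every point of an (N, 3) point cloud, and results are typically scored by mean Intersection-over-Union (mIoU) over classes. A minimal sketch of that evaluation (the class names and toy labels here are illustrative, not from any benchmark):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """IoU per class; NaN for classes absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))   # points correctly labeled c
        union = np.sum((pred == c) | (gt == c))   # points labeled c by either
        ious.append(inter / union if union > 0 else np.nan)
    return np.array(ious)

# Toy scene: 6 points, 3 classes (0=road, 1=car, 2=vegetation -- hypothetical).
gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 2, 2, 2])

ious = per_class_iou(pred, gt, num_classes=3)  # [1.0, 0.5, 0.667]
miou = float(np.nanmean(ious))
```

The NaN convention matters in practice: a class that appears in neither prediction nor ground truth should be excluded from the mean rather than counted as zero.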

Libraries

Use these libraries to find 3D Semantic Segmentation models and implementations

FRNet: Frustum-Range Networks for Scalable LiDAR Segmentation

ldkong1205/Robo3D 7 Dec 2023

LiDAR segmentation has become a crucial component in advanced autonomous driving systems.


OneFormer3D: One Transformer for Unified Point Cloud Segmentation

oneformer3d/oneformer3d 24 Nov 2023

Semantic, instance, and panoptic segmentation of 3D point clouds have been addressed using task-specific models of distinct design.


GNeSF: Generalizable Neural Semantic Fields

hlinchen/gnesf NeurIPS 2023

We propose a novel soft voting mechanism to aggregate the 2D semantic information from different views for each 3D point.
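Soft voting in this sense means each view contributes its full per-class probability distribution, rather than a single hard argmax vote, and the distributions are fused before the final label is taken. A hedged sketch of the idea (not the paper's actual implementation; the per-view weights are hypothetical confidence scores):

```python
import numpy as np

def soft_vote(view_probs, view_weights):
    """Fuse per-view class distributions for one 3D point.

    view_probs:   (V, C) array, one class distribution per view
    view_weights: (V,) per-view weights (e.g. visibility/confidence, assumed here)
    """
    w = np.asarray(view_weights, dtype=float)
    w = w / w.sum()                                 # normalize weights over views
    fused = (w[:, None] * view_probs).sum(axis=0)   # weighted average, shape (C,)
    return int(fused.argmax()), fused

# Three views of one point, three classes; the third view is down-weighted.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])
label, fused = soft_vote(probs, view_weights=[1.0, 1.0, 0.5])  # label = 0
```

Averaging distributions keeps uncertainty from each view, so a confident minority view cannot be outvoted as easily as under hard majority voting.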

24 Oct 2023

Vision Transformers increase efficiency of 3D cardiac CT multi-label segmentation

ljollans/trunet 13 Oct 2023

Accurate segmentation of the heart is essential for personalized blood flow simulations and surgical intervention planning.


PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm

OpenGVLab/PonderV2 12 Oct 2023

In this paper, we introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation, thereby establishing a pathway to 3D foundational models.


UniPAD: A Universal Pre-training Paradigm for Autonomous Driving

Nightmare-n/UniPAD 12 Oct 2023

In the context of autonomous driving, the significance of effective feature learning is widely acknowledged.


Towards Robust Robot 3D Perception in Urban Environments: The UT Campus Object Dataset

ut-amrl/coda-devkit 24 Sep 2023

Using our dataset and annotations, we release benchmarks for 3D object detection and 3D semantic segmentation using established metrics.


MoPA: Multi-Modal Prior Aided Domain Adaptation for 3D Semantic Segmentation

aroncao49/mopa 21 Sep 2023

In this work, we propose Multi-modal Prior Aided (MoPA) domain adaptation to improve the performance of rare objects.


T-UDA: Temporal Unsupervised Domain Adaptation in Sequential Point Clouds

ctu-vras/t-uda 15 Sep 2023

Deep perception models have to reliably cope with an open-world setting of domain shifts induced by different geographic regions, sensor properties, mounting positions, and several other reasons.


UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase

pjlab-adg/pcseg ICCV 2023

In addition, we construct the OpenPCSeg codebase, the largest and most comprehensive outdoor LiDAR segmentation codebase.

11 Sep 2023