3D Semantic Segmentation

169 papers with code • 14 benchmarks • 31 datasets

3D Semantic Segmentation is a computer vision task that divides a 3D point cloud or 3D mesh into semantically meaningful parts or regions by assigning a class label to every point or mesh face. These labels identify the objects and surfaces within a 3D scene and support applications such as robotics, autonomous driving, and augmented reality.
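
Benchmarks for this task typically report per-class intersection-over-union averaged over classes (mIoU). As a minimal sketch (not tied to any particular benchmark's evaluation script), the metric can be computed from per-point integer labels like this:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Per-class intersection-over-union, averaged over classes that appear.

    pred, gt: integer class labels of shape (N,), one entry per 3D point.
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                       # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

# Example: 5 points, 3 classes (0 = road, 1 = car, 2 = vegetation)
gt   = np.array([0, 0, 1, 2, 2])
pred = np.array([0, 1, 1, 2, 2])
print(mean_iou(pred, gt, num_classes=3))    # ≈ 0.667
```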

Libraries

Use these libraries to find 3D Semantic Segmentation models and implementations
See all 7 libraries.

Most implemented papers

SalsaNet: Fast Road and Vehicle Segmentation in LiDAR Point Clouds for Autonomous Driving

aksoyeren/salsanet 18 Sep 2019

SalsaNet segments the road, i.e., drivable free space, and vehicles in the scene by employing a Bird-Eye-View (BEV) image projection of the point cloud.
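
To illustrate the BEV idea, a point cloud can be rasterised into a top-down image as in the sketch below; the grid extent, resolution, and channel choices are illustrative assumptions, not SalsaNet's exact configuration.

```python
import numpy as np

def bev_projection(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.25):
    """Rasterise an (N, 3) point cloud into a 2-channel BEV image:
    channel 0 = max height per cell, channel 1 = point count per cell."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((2, nx, ny), dtype=np.float32)

    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for x, y, z in zip(ix[valid], iy[valid], points[valid, 2]):
        bev[0, x, y] = max(bev[0, x, y], z)    # height map
        bev[1, x, y] += 1.0                    # density map
    return bev
```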

SqueezeSegV3: Spatially-Adaptive Convolution for Efficient Point-Cloud Segmentation

chenfengxu714/SqueezeSegV3 ECCV 2020

Using standard convolutions to process such LiDAR images is problematic, as convolution filters pick up local features that are only active in specific regions in the image.
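
The "LiDAR images" referred to here are spherical range-image projections of the point cloud. A rough sketch of such a projection is shown below, with field-of-view and resolution values chosen only for illustration:

```python
import numpy as np

def range_image(points, h=64, w=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR sweep onto an h x w spherical range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8
    yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                     # elevation

    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * w            # column from azimuth
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h  # row from elevation

    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)

    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                                # one range value per pixel
    return img
```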

Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation

xinge008/Cylinder3D 4 Aug 2020

A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
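
Cylinder3D follows this idea by partitioning the scene into cylindrical voxels rather than regular cubes. Below is a hedged sketch of assigning points to cylindrical cells; the grid sizes and ranges are illustrative, not the paper's exact settings.

```python
import numpy as np

def cylindrical_voxel_ids(points, grid=(480, 360, 32),
                          rho_range=(0.0, 50.0), z_range=(-4.0, 2.0)):
    """Assign every point of an (N, 3) cloud to a (rho, phi, z) cylinder cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)
    phi = np.arctan2(y, x)                        # angle in [-pi, pi]

    i = np.clip((rho - rho_range[0]) / (rho_range[1] - rho_range[0]) * grid[0],
                0, grid[0] - 1)
    j = np.clip((phi + np.pi) / (2 * np.pi) * grid[1], 0, grid[1] - 1)
    k = np.clip((z - z_range[0]) / (z_range[1] - z_range[0]) * grid[2],
                0, grid[2] - 1)
    return np.stack([i, j, k], axis=1).astype(int)  # (N, 3) voxel indices
```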

RELLIS-3D Dataset: Data, Benchmarks and Analysis

unmannedlab/RELLIS-3D 17 Nov 2020

The data was collected on the Rellis Campus of Texas A&M University and presents challenges to existing algorithms related to class imbalance and environmental topography.

Cross-modal Learning for Domain Adaptation in 3D Semantic Segmentation

valeoai/xmuda 18 Jan 2021

Domain adaptation is an important task to enable learning when labels are scarce.

Mix3D: Out-of-Context Data Augmentation for 3D Scenes

kumuji/mix3d 5 Oct 2021

Since scene context helps reasoning about object semantics, current works focus on models with large capacity and receptive fields that can fully capture the global context of an input 3D scene.
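
The Mix3D augmentation itself is simple: two labelled scenes are combined into a single out-of-context training sample. A simplified sketch, omitting the per-scene augmentations applied before mixing, might look like:

```python
import numpy as np

def mix_scenes(points_a, labels_a, points_b, labels_b):
    """Combine two labelled point clouds into one out-of-context sample."""
    # Centre each scene so the two clouds overlap instead of sitting side by side.
    points_a = points_a - points_a.mean(axis=0, keepdims=True)
    points_b = points_b - points_b.mean(axis=0, keepdims=True)
    points = np.concatenate([points_a, points_b], axis=0)
    labels = np.concatenate([labels_a, labels_b], axis=0)
    return points, labels
```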

Scribble-Supervised LiDAR Semantic Segmentation

ouenal/scribblekitti CVPR 2022

Densely annotating LiDAR point clouds remains too expensive and time-consuming to keep up with the ever-growing volume of data.

PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies

guochengqian/pointnext 9 Jun 2022

In this work, we revisit the classical PointNet++ through a systematic study of model training and scaling strategies, and offer two major contributions.

OctFormer: Octree-based Transformers for 3D Point Clouds

octree-nn/octformer 4 May 2023

To combat this issue, several works divide point clouds into non-overlapping windows and constrain attentions in each local window.
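
A hedged sketch of such window-constrained attention is shown below; grouping consecutive points into fixed-size windows stands in for OctFormer's octree-based ordering, and the module names are illustrative.

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Self-attention applied independently inside fixed-size windows of points."""
    def __init__(self, dim, heads=4, window=32):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):
        # feats: (N, C) point features, assumed already ordered so that
        # consecutive points are spatially close (e.g. sorted by octree key).
        n, c = feats.shape
        pad = (-n) % self.window
        if pad:
            # Zero padding keeps the last window full; padded rows are dropped below.
            feats = torch.cat([feats, feats.new_zeros(pad, c)], dim=0)
        x = feats.view(-1, self.window, c)        # (num_windows, window, C)
        out, _ = self.attn(x, x, x)               # attention within each window only
        return out.reshape(-1, c)[:n]
```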

Point Transformer V3: Simpler, Faster, Stronger

Pointcept/Pointcept 15 Dec 2023

This paper is not motivated to seek innovation within the attention mechanism.