RELLIS-3D Dataset: Data, Benchmarks and Analysis

17 Nov 2020 · Peng Jiang, Philip Osteen, Maggie Wigness, Srikanth Saripalli

Semantic scene understanding is crucial for robust and safe autonomous navigation, particularly in off-road environments. Recent deep learning advances for 3D semantic segmentation rely heavily on large sets of training data; however, existing autonomy datasets either represent urban environments or lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal dataset collected in an off-road environment, which contains annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis Campus of Texas A&M University and presents challenges to existing algorithms related to class imbalance and environmental topography. Additionally, we evaluate current state-of-the-art deep learning semantic segmentation models on this dataset. Experimental results show that RELLIS-3D presents challenges for algorithms designed for segmentation in urban environments. This novel dataset provides the resources needed by researchers to continue to develop more advanced algorithms and investigate new research directions to enhance autonomous navigation in off-road environments. RELLIS-3D is available at https://github.com/unmannedlab/RELLIS-3D
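To make the data layout concrete, here is a minimal Python sketch for reading one LiDAR scan and its per-point labels, assuming a SemanticKITTI-style binary format (four float32 values per point; one uint32 label per point with the semantic class in the lower 16 bits). The file paths are hypothetical placeholders and the format details are assumptions; consult the repository's documentation for the authoritative layout.

```python
import numpy as np

# Hypothetical paths; substitute real scan/label files from the release.
scan_path = "path/to/sequence/scan.bin"
label_path = "path/to/sequence/scan.label"

# SemanticKITTI-style scans store 4 float32 values per point:
# x, y, z, and intensity.
points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

# One uint32 label per point; by SemanticKITTI convention the lower
# 16 bits hold the semantic class id (an assumption for this release).
labels = np.fromfile(label_path, dtype=np.uint32) & 0xFFFF

assert points.shape[0] == labels.shape[0]
print(f"{points.shape[0]} points, {np.unique(labels).size} classes present")
```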

Datasets


Introduced in the Paper:

RELLIS-3D

Used in the Paper:

Cityscapes, SemanticKITTI
Task                      Dataset    Model      Metric            Value  Global Rank
3D Semantic Segmentation  RELLIS-3D  SalsaNext  Mean IoU (class)  43.07  #1
3D Semantic Segmentation  RELLIS-3D  KPConv     Mean IoU (class)  19.97  #2
Semantic Segmentation     RELLIS-3D  GSCNN      Mean IoU (class)  50.13  #2
Semantic Segmentation     RELLIS-3D  HRNet+OCR  Mean IoU (class)  48.83  #3
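Each benchmark above reports Mean IoU (class): the per-class intersection-over-union averaged across classes. As a reference, the sketch below computes it from flat prediction and ground-truth label arrays; the ignore-label and class-averaging conventions behind the numbers above are not specified on this page, so treat those details as assumptions.

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=0):
    """Mean IoU over classes, skipping an ignore class and any class
    absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        if c == ignore_index:
            continue
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class does not occur in this evaluation set
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy usage with random labels over 5 classes (class 0 ignored).
rng = np.random.default_rng(0)
pred = rng.integers(0, 5, size=10_000)
target = rng.integers(0, 5, size=10_000)
print(f"Mean IoU: {mean_iou(pred, target, num_classes=5):.4f}")
```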
