Learning to Fuse Things and Stuff

4 Dec 2018 · Jie Li, Allan Raventos, Arjun Bhargava, Takaaki Tagawa, Adrien Gaidon

We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict both things and stuff segmentations in a single feed-forward pass. We explicitly constrain these two output distributions through a global things-and-stuff binary mask to enforce cross-task consistency. The proposed unified network is competitive with the state of the art on several panoptic segmentation benchmarks, as well as on the individual semantic and instance segmentation tasks.
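
The cross-task consistency constraint described above lends itself to a short sketch. Below is a minimal PyTorch-style rendering of a things-and-stuff consistency loss under stated assumptions; the function and tensor names (`tasc_loss`, `semantic_logits`, `instance_fg_logits`, `thing_class_ids`) are illustrative, not the authors' released code.

```python
# Minimal sketch of a things-and-stuff consistency (TASC) loss, assuming PyTorch.
# Shapes and names are illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def tasc_loss(semantic_logits: torch.Tensor,
              instance_fg_logits: torch.Tensor,
              thing_class_ids: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the foreground ("things") mask implied by
    the semantic (stuff) branch and the one implied by the instance (things) branch.

    semantic_logits:    (B, C, H, W) per-class scores from the semantic branch.
    instance_fg_logits: (B, 1, H, W) aggregated foreground scores from the
                        instance branch (e.g. pasted instance mask logits).
    thing_class_ids:    1-D tensor with the indices of "thing" classes in C.
    """
    # Soft foreground probability according to the semantic branch:
    # total softmax mass assigned to thing classes at each pixel.
    sem_probs = semantic_logits.softmax(dim=1)
    sem_fg = sem_probs[:, thing_class_ids].sum(dim=1, keepdim=True)  # (B, 1, H, W)

    # Soft foreground probability according to the instance branch.
    inst_fg = torch.sigmoid(instance_fg_logits)  # (B, 1, H, W)

    # L2 penalty on the residual between the two soft foreground masks.
    return F.mse_loss(inst_fg, sem_fg)
```

In training, a term like this would be added to the usual semantic and instance losses with a scalar weight, so both heads are pushed toward a single consistent things/stuff partition of the image.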

Results from the Paper


Ranked #26 on Panoptic Segmentation on Cityscapes val (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50) | PQ | 59.2 | #28 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50) | PQst | 61.5 | #20 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50) | PQth | 56.0 | #12 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50) | mIoU | 77.8 | #23 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50) | AP | 37.6 | #20 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50, multi-scale) | PQ | 60.4 | #26 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50, multi-scale) | PQst | 63.3 | #13 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50, multi-scale) | PQth | 56.1 | #11 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50, multi-scale) | mIoU | 78.0 | #22 |
| Panoptic Segmentation | Cityscapes val | TASCNet (ResNet-50, multi-scale) | AP | 39.0 | #16 |
| Panoptic Segmentation | COCO test-dev | TASCNet | PQ | 40.7 | #33 |
| Panoptic Segmentation | COCO test-dev | TASCNet | PQst | 31.0 | #32 |
| Panoptic Segmentation | COCO test-dev | TASCNet | PQth | 47.0 | #29 |
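
For reference, PQ in the table above is the panoptic quality metric of Kirillov et al. ("Panoptic Segmentation"), with PQst and PQth its restrictions to stuff and thing classes: predicted and ground-truth segments matched at IoU > 0.5 count as true positives, and PQ averages their IoUs while penalizing unmatched predictions (FP) and unmatched ground truth (FN). A minimal sketch follows, assuming per-class matching has already been done; `matches`, `num_fp`, and `num_fn` are illustrative inputs.

```python
# Sketch of the panoptic quality (PQ) metric reported above. This computes PQ
# for a single class; the reported numbers average this quantity over classes.
def panoptic_quality(matches: list, num_fp: int, num_fn: int) -> float:
    """PQ = (sum of IoUs over true positives) / (TP + FP/2 + FN/2).

    matches: IoU of each matched predicted/ground-truth segment pair (IoU > 0.5).
    num_fp:  unmatched predicted segments (false positives).
    num_fn:  unmatched ground-truth segments (false negatives).
    """
    tp = len(matches)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matches) / denom if denom > 0 else 0.0

# Example: 3 matched segments with IoUs 0.8, 0.9, 0.7, plus 1 FP and 1 FN:
# PQ = 2.4 / (3 + 0.5 + 0.5) = 0.6.
print(panoptic_quality([0.8, 0.9, 0.7], num_fp=1, num_fn=1))
```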
