Universal Segmentation at Arbitrary Granularity with Language Instruction

4 Dec 2023 · Yong Liu, Cairong Zhang, Yitong Wang, Jiahao Wang, Yujiu Yang, Yansong Tang

This paper aims to achieve universal segmentation at arbitrary semantic levels. Despite significant progress in recent years, specialist segmentation approaches remain limited to specific tasks and data distributions. Retraining a new model to adapt to new scenarios or settings incurs expensive computation and time costs, which raises the demand for a versatile, universal segmentation model that can cater to various granularities. Although some attempts have been made at unifying different segmentation tasks or generalizing to various scenarios, limitations in their definitions of paradigms and input-output spaces make it difficult for them to accurately understand content at arbitrary granularity. To this end, we present UniLSeg, a universal segmentation model that can perform segmentation at any semantic level under the guidance of language instructions. To train UniLSeg, we reorganize a group of tasks from their original diverse distributions into a unified data format, in which images paired with texts describing the segmentation targets serve as input and the corresponding masks as output. Combined with an automatic annotation engine that exploits numerous unlabeled data, UniLSeg achieves excellent performance on various tasks and settings, surpassing both specialist and other unified segmentation models.
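To make the unified input-output space concrete, here is a minimal sketch of what a sample in that format and a language-guided inference call could look like. All names below (`UniLSegSample`, `UniLSegModel`, `segment`) are hypothetical illustrations rather than the authors' released code; the abstract only specifies that each sample pairs an image and a target-describing text with the corresponding mask.

```python
# Hypothetical sketch of the unified data format described in the abstract:
# (image, text describing the target) -> mask. The class and method names
# are illustrative, not the authors' released API.
from dataclasses import dataclass
import numpy as np


@dataclass
class UniLSegSample:
    image: np.ndarray         # H x W x 3 RGB image
    text: str                 # language instruction naming the target, e.g.
                              # "the dog on the left" (referring expression),
                              # "sky" (semantic class), "salient object"
    mask: np.ndarray | None   # H x W binary ground-truth mask (None at test time)


class UniLSegModel:
    """Placeholder for a language-guided segmentation model."""

    def segment(self, image: np.ndarray, text: str) -> np.ndarray:
        # A real model would fuse visual and text features and decode a mask;
        # this stub only fixes the interface shape.
        return np.zeros(image.shape[:2], dtype=np.uint8)


# Tasks at different granularities reduce to the same call:
model = UniLSegModel()
img = np.zeros((480, 640, 3), dtype=np.uint8)
for prompt in ["the dog on the left", "sky", "salient object"]:
    pred = model.segment(img, prompt)  # H x W binary mask in every case
```

The point of the sketch is that once every task is phrased this way, referring, semantic, and salient-object segmentation differ only in the text prompt, not in the model interface.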


Results from the Paper


UniLSeg ranks #1 for Referring Expression Segmentation on RefCOCOg-test (using extra training data).

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Referring Expression Segmentation | RefCOCOg-test | UniLSeg-20 | Overall IoU | 79.47 | #2 |
| Referring Expression Segmentation | RefCOCOg-test | UniLSeg-100 | Overall IoU | 80.54 | #1 |
| Referring Expression Segmentation | RefCOCOg-val | UniLSeg-100 | Overall IoU | 79.27 | #1 |
| Referring Expression Segmentation | RefCOCOg-val | UniLSeg-20 | Overall IoU | 78.41 | #2 |
| Referring Expression Segmentation | RefCOCO testA | UniLSeg-100 | Overall IoU | 83.17 | #2 |
| Referring Expression Segmentation | RefCOCO+ testA | UniLSeg-20 | Overall IoU | 77.02 | #2 |
| Referring Expression Segmentation | RefCOCO+ testA | UniLSeg-100 | Overall IoU | 78.29 | #1 |
| Referring Expression Segmentation | RefCOCO+ testB | UniLSeg-100 | Overall IoU | 68.15 | #1 |
| Referring Expression Segmentation | RefCOCO+ testB | UniLSeg-20 | Overall IoU | 66.99 | #2 |
| Referring Expression Segmentation | RefCOCO val | UniLSeg-100 | Overall IoU | 81.74 | #3 |
| Referring Expression Segmentation | RefCOCO+ val | UniLSeg-100 | Overall IoU | 73.18 | #2 |
| Referring Expression Segmentation | RefCOCO+ val | UniLSeg-20 | Overall IoU | 72.70 | #3 |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | UniLSeg-100 | J&F | 64.9 | #11 |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | UniLSeg-100 | J | 62.8 | #10 |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | UniLSeg-100 | F | 67.0 | #10 |
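A note on the metrics above: Overall IoU (oIoU) accumulates intersection and union pixel counts over the whole test set before dividing, which weights large objects more heavily than mean IoU (mIoU), the average of per-sample ratios. For the video benchmark, J is the mean region similarity (Jaccard index), F is the boundary accuracy, and J&F is their average. The sketch below contrasts oIoU and mIoU on toy binary masks; the helper names and data are illustrative only.

```python
# Overall IoU (oIoU) as used in referring expression segmentation benchmarks:
# cumulative intersection over cumulative union across the test set, in
# contrast to mean IoU (mIoU), which averages per-sample ratios.
import numpy as np


def overall_iou(preds: list[np.ndarray], gts: list[np.ndarray]) -> float:
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    union = sum(np.logical_or(p, g).sum() for p, g in zip(preds, gts))
    return float(inter / union)


def mean_iou(preds: list[np.ndarray], gts: list[np.ndarray]) -> float:
    ious = [np.logical_and(p, g).sum() / max(np.logical_or(p, g).sum(), 1)
            for p, g in zip(preds, gts)]
    return float(np.mean(ious))


# Toy example with two binary masks: oIoU is dominated by the large object,
# while mIoU is penalized more by the badly-missed small one.
p1, g1 = np.ones((4, 4), bool), np.ones((4, 4), bool)   # perfect, large mask
p2, g2 = np.zeros((2, 2), bool), np.ones((2, 2), bool)  # mostly missed, small
p2[0, 0] = True                                         # tiny overlap
print(overall_iou([p1, p2], [g1, g2]))  # 0.85
print(mean_iou([p1, p2], [g1, g2]))     # 0.625
```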

Methods


No methods listed for this paper.