MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation

CVPR 2023 · Yong Yang, Qiong Chen, Yuan Feng, Tianlin Huang

Existing few-shot segmentation methods are based on the meta-learning strategy: they extract instance knowledge from a support set and then apply that knowledge to segment target objects in a query set. However, the extracted knowledge is insufficient to cope with variable intra-class differences, since it is obtained from only a few samples in the support set. To address this problem, we propose a multi-information aggregation network (MIANet) that effectively leverages general knowledge, i.e., semantic word embeddings, together with instance information for accurate segmentation. Specifically, in MIANet, a general information module (GIM) is proposed to extract a general class prototype from word embeddings as a supplement to instance information. To this end, we design a triplet loss that treats the general class prototype as the anchor and samples positive-negative pairs from local features in the support set. This triplet loss transfers semantic similarities among language identities from the word-embedding space to the visual representation space. To alleviate the model's bias toward the seen training classes and to obtain multi-scale information, we then introduce a non-parametric hierarchical prior module (HPM) that generates unbiased instance-level information by computing pixel-level similarity between the support and query image features. Finally, an information fusion module (IFM) combines the general and instance information to make predictions for the query image. Extensive experiments on PASCAL-5i and COCO-20i show that MIANet yields superior performance and sets a new state of the art. Code is available at https://github.com/Aldrich2y/MIANet.
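The two non-parametric pieces of the abstract — the triplet loss anchored on the general class prototype, and the pixel-level similarity prior — can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the function names, the Euclidean distance in the triplet loss, the margin value, and the max-over-support-pixels / min-max normalization in the prior are all assumptions for the sake of a self-contained example.

```python
import numpy as np

def triplet_loss(anchor, positives, negatives, margin=0.5):
    """Hinge-style triplet loss (illustrative): pull positive local
    features toward the general class prototype (the anchor) and push
    negatives away by at least `margin`.

    anchor:    (D,)   general class prototype from word embeddings
    positives: (N, D) local support features of the target class
    negatives: (N, D) local support features of other classes/background
    """
    d_pos = np.linalg.norm(positives - anchor, axis=1)  # anchor-positive distances
    d_neg = np.linalg.norm(negatives - anchor, axis=1)  # anchor-negative distances
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def prior_mask(query_feat, support_feat, support_mask, eps=1e-8):
    """Non-parametric prior (illustrative): for each query pixel, the max
    cosine similarity to any foreground support pixel, min-max normalized
    to [0, 1]. Assumes `support_mask` contains at least one foreground pixel.

    query_feat:   (Nq, D) flattened query features
    support_feat: (Ns, D) flattened support features
    support_mask: (Ns,)   binary foreground mask for the support pixels
    """
    q = query_feat / (np.linalg.norm(query_feat, axis=1, keepdims=True) + eps)
    s = support_feat / (np.linalg.norm(support_feat, axis=1, keepdims=True) + eps)
    sim = q @ s.T                                            # (Nq, Ns) cosine sims
    sim = np.where(support_mask[None, :] > 0, sim, -np.inf)  # foreground pixels only
    prior = sim.max(axis=1)                                  # best match per query pixel
    lo, hi = prior.min(), prior.max()
    return (prior - lo) / (hi - lo + eps)
```

For example, a query pixel whose feature matches a foreground support pixel gets a prior value near 1, while an unmatched pixel gets a value near 0 — giving the unbiased instance-level cue that the IFM can then fuse with the general class prototype.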

Task: Few-Shot Semantic Segmentation

Dataset              Model                 Metric    Value   Rank
COCO-20i (1-shot)    MIANet (ResNet-50)    Mean IoU  47.66   #11
COCO-20i (1-shot)    MIANet (ResNet-50)    FB-IoU    71.51   #5
COCO-20i (1-shot)    MIANet (VGG-16)       Mean IoU  45.69   #24
COCO-20i (1-shot)    MIANet (VGG-16)       FB-IoU    71.01   #8
COCO-20i (5-shot)    MIANet (VGG-16)       Mean IoU  51.03   #25
COCO-20i (5-shot)    MIANet (VGG-16)       FB-IoU    73.81   #8
COCO-20i (5-shot)    MIANet (ResNet-50)    Mean IoU  51.65   #21
COCO-20i (5-shot)    MIANet (ResNet-50)    FB-IoU    73.13   #12
PASCAL-5i (1-shot)   MIANet (ResNet-101)   Mean IoU  67.63   #21
PASCAL-5i (1-shot)   MIANet (ResNet-50)    Mean IoU  68.72   #14
PASCAL-5i (1-shot)   MIANet (ResNet-50)    FB-IoU    79.54   #9
PASCAL-5i (1-shot)   MIANet (VGG-16)       Mean IoU  67.10   #26
PASCAL-5i (1-shot)   MIANet (VGG-16)       FB-IoU    79.22   #12
PASCAL-5i (5-shot)   MIANet (ResNet-50)    Mean IoU  71.59   #22
PASCAL-5i (5-shot)   MIANet (ResNet-50)    FB-IoU    82.20   #12
PASCAL-5i (5-shot)   MIANet (VGG-16)       Mean IoU  71.99   #17
PASCAL-5i (5-shot)   MIANet (VGG-16)       FB-IoU    82.69   #10
