Progressively Normalized Self-Attention Network for Video Polyp Segmentation

18 May 2021  ·  Ge-Peng Ji, Yu-Cheng Chou, Deng-Ping Fan, Geng Chen, Huazhu Fu, Debesh Jha, Ling Shao

Existing video polyp segmentation (VPS) models typically employ convolutional neural networks (CNNs) to extract features. However, due to their limited receptive fields, CNNs cannot fully exploit the global temporal and spatial information in successive video frames, resulting in false-positive segmentation results. In this paper, we propose the novel PNS-Net (Progressively Normalized Self-attention Network), which can efficiently learn representations from polyp videos at real-time speed (~140 fps) on a single RTX 2080 GPU, without post-processing. Our PNS-Net is based solely on a basic normalized self-attention block, dispensing with recurrence and CNNs entirely. Experiments on challenging VPS datasets demonstrate that the proposed PNS-Net achieves state-of-the-art performance. We also conduct extensive experiments to study the effectiveness of the channel split, soft-attention, and progressive learning strategy. We find that our PNS-Net works well under different settings, making it a promising solution to the VPS task.
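To make the channel-split self-attention idea concrete, here is a minimal NumPy sketch of a self-attention block that splits the channel dimension into groups and attends within each group before concatenating. This is an illustrative simplification, not the paper's exact PNS block: the function name, group count, and the flattening of spatio-temporal positions into a single axis are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_split_self_attention(x, num_groups=4):
    """Toy channel-split self-attention (illustrative, not the paper's code).

    x: array of shape (N, C), where N is the number of flattened
       spatio-temporal positions (T*H*W) and C is the channel count.
    The channels are split into `num_groups` groups; scaled dot-product
    attention is computed independently per group, then the group
    outputs are concatenated back along the channel axis.
    """
    n, c = x.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    outs = []
    for g in np.split(x, num_groups, axis=1):
        d = g.shape[1]
        # Soft attention over positions within this channel group.
        attn = softmax(g @ g.T / np.sqrt(d), axis=-1)  # (N, N)
        outs.append(attn @ g)
    return np.concatenate(outs, axis=1)  # (N, C)
```

In the real network, queries, keys, and values come from learned projections of multi-scale temporal features; here a single tensor plays all three roles purely to keep the sketch short.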

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | PNSNet | S-measure | 0.767 | #6 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | PNSNet | mean E-measure | 0.744 | #7 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | PNSNet | weighted F-measure | 0.616 | #5 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | PNSNet | mean F-measure | 0.664 | #5 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | PNSNet | Dice | 0.676 | #7 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | PNSNet | Sensitivity | 0.574 | #5 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | PNSNet | S-measure | 0.767 | #6 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | PNSNet | mean E-measure | 0.755 | #5 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | PNSNet | weighted F-measure | 0.609 | #5 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | PNSNet | mean F-measure | 0.656 | #5 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | PNSNet | Dice | 0.675 | #7 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | PNSNet | Sensitivity | 0.579 | #5 |
