
Infrared Small-Dim Target Detection with Transformer under Complex Backgrounds

Infrared small-dim target detection is one of the key techniques in infrared search and tracking systems. Since local regions resembling infrared small-dim targets are spread over the whole background, exploring the interaction information among image features over large-range dependencies to mine the difference between target and background is crucial for robust detection. However, existing deep learning-based methods are limited by the locality of convolutional neural networks, which impairs their ability to capture large-range dependencies. Additionally, the small and dim appearance of infrared targets makes the detection model prone to missed detections. To this end, we propose a robust and general infrared small-dim target detection method based on the transformer. We adopt the self-attention mechanism of the transformer to learn the interaction information of image features over a larger range. Moreover, we design a feature enhancement module to learn discriminative features of small-dim targets and thereby avoid missed detections. After that, to avoid losing target information, we adopt a decoder with U-Net-like skip connections to retain more information about small-dim targets. Finally, the detection result is produced by a segmentation head. Extensive experiments on two public datasets show that the proposed method clearly outperforms state-of-the-art methods and exhibits stronger cross-scene generalization and noise robustness.
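
The abstract outlines a pipeline of a transformer-based encoder for large-range dependencies, a feature enhancement module, a U-Net-like decoder with skip connections, and a segmentation head. Below is a minimal PyTorch-style sketch of such a pipeline; the module names, channel sizes, and the channel-attention enhancement block are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch only: transformer over CNN feature maps + hypothetical enhancement
# block + U-Net-like skip-connection decoder + segmentation head.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class FeatureEnhancement(nn.Module):
    """Hypothetical enhancement: channel attention to emphasize dim targets."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)


class SmallDimTargetNet(nn.Module):
    def __init__(self, embed_dim=64, num_heads=4, depth=2):
        super().__init__()
        # CNN encoder; two scales are kept for skip connections.
        self.enc1 = ConvBlock(1, embed_dim // 2)          # full resolution
        self.down1 = nn.MaxPool2d(2)
        self.enc2 = ConvBlock(embed_dim // 2, embed_dim)  # 1/2 resolution
        self.down2 = nn.MaxPool2d(2)
        # Self-attention over the 1/4-resolution map captures long-range
        # dependencies between target-like regions across the image.
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.enhance = FeatureEnhancement(embed_dim)
        # U-Net-like decoder with skip connections to preserve small targets.
        self.up1 = nn.ConvTranspose2d(embed_dim, embed_dim, 2, stride=2)
        self.dec1 = ConvBlock(embed_dim * 2, embed_dim)
        self.up2 = nn.ConvTranspose2d(embed_dim, embed_dim // 2, 2, stride=2)
        self.dec2 = ConvBlock(embed_dim, embed_dim // 2)
        self.seg_head = nn.Conv2d(embed_dim // 2, 1, 1)   # per-pixel logits

    def forward(self, x):
        s1 = self.enc1(x)                       # (B, C/2, H, W)
        s2 = self.enc2(self.down1(s1))          # (B, C, H/2, W/2)
        f = self.down2(s2)                      # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)   # (B, HW/16, C) token sequence
        tokens = self.transformer(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        f = self.enhance(f)
        d1 = self.dec1(torch.cat([self.up1(f), s2], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d1), s1], dim=1))
        return self.seg_head(d2)                # target mask logits


if __name__ == "__main__":
    model = SmallDimTargetNet()
    logits = model(torch.randn(1, 1, 128, 128))  # single-channel IR image
    print(logits.shape)                          # torch.Size([1, 1, 128, 128])
```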
