A Multi-Scale Recurrent Framework for Motion Segmentation With Event Camera

IEEE Access 2023  ·  Shaobo Zhang, Lei Sun, Kaiwei Wang

Motion segmentation is a challenging computer vision task that aims to segment moving targets from a dynamic scene. In this paper, we introduce an additional modality to bolster robustness. The event camera is a bio-inspired sensor that accurately detects and captures intensity changes with exceptional temporal resolution and dynamic range, making it an optimal choice for motion segmentation. We therefore present a novel framework for event-based motion segmentation and propose a Multi-Scale Recurrent Neural Network (MSRNN) to fuse temporal information efficiently. To the best of our knowledge, this is the first time a multi-scale recurrent architecture has been applied to event-based motion segmentation. The proposed framework is evaluated on the EV-IMO dataset, where our method achieves a mean Intersection-over-Union (mIoU) of 82.0%, setting a new state of the art in motion segmentation. To further validate our approach in challenging real-world scenarios, we introduce the Event Challenging Motion dataset, consisting of 350 images and corresponding events, on which our method outperforms the other methods by 1.5% in Intersection-over-Union (IoU).
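The abstract does not detail the MSRNN architecture, but the core idea it names — maintaining recurrent state at multiple spatial scales and fusing them over time — can be sketched in miniature. The snippet below is a toy illustration, not the paper's implementation: it keeps one elementwise GRU-style hidden state per scale with scalar gate parameters (a real network would learn convolutional kernels) and fuses the scales by nearest-neighbour upsampling and averaging. All names and parameters here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def downsample(x, factor):
    """Average-pool a 2D event frame by an integer factor."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor):
    """Nearest-neighbour upsample back toward full resolution."""
    return np.kron(x, np.ones((factor, factor)))

class MultiScaleRecurrent:
    """Toy multi-scale recurrent fusion over a stream of event frames.

    One hidden state per scale is updated with a GRU-style gate each
    timestep, then all scales are upsampled and averaged into one map.
    """
    def __init__(self, scales=(1, 2, 4), shape=(32, 32), seed=0):
        rng = np.random.default_rng(seed)
        self.scales = scales
        self.shape = shape
        self.h = {s: np.zeros((shape[0] // s, shape[1] // s)) for s in scales}
        # scalar gate weights purely for illustration; a real model
        # would use learned convolutions here
        self.wz = {s: rng.normal() for s in scales}
        self.wh = {s: rng.normal() for s in scales}

    def step(self, event_frame):
        fused = np.zeros(self.shape)
        for s in self.scales:
            x = downsample(event_frame, s)
            z = sigmoid(self.wz[s] * (x + self.h[s]))      # update gate
            cand = np.tanh(self.wh[s] * (x + self.h[s]))   # candidate state
            self.h[s] = (1 - z) * self.h[s] + z * cand     # recurrent update
            fused += upsample(self.h[s], s)
        return fused / len(self.scales)
```

Feeding successive event frames through `step` accumulates temporal context at every scale, which is the property the paper's MSRNN exploits for segmenting moving objects.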
