Motion Segmentation
54 papers with code • 4 benchmarks • 7 datasets
Motion segmentation is an essential task in many Computer Vision and Robotics applications, such as surveillance, action recognition, and scene understanding. The problem is classically stated as follows: given a set of feature points tracked through a sequence of images, cluster their trajectories according to the different motions they belong to. The scene is assumed to contain multiple objects moving rigidly and independently in 3D space.
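The classic formulation above can be illustrated with a minimal sketch: summarize each tracked trajectory by its mean frame-to-frame velocity, then cluster those velocity vectors. The helper names (`mean_velocity`, `kmeans2`) and the two-motion toy data are illustrative assumptions, not part of any specific paper's method; real systems use richer motion models (e.g. affine or rigid 3D motion) and more robust clustering.

```python
import math

def mean_velocity(traj):
    """Average frame-to-frame displacement of a trajectory [(x, y), ...]."""
    n = len(traj) - 1
    return ((traj[-1][0] - traj[0][0]) / n,
            (traj[-1][1] - traj[0][1]) / n)

def kmeans2(points, iters=10):
    """Tiny 2-means over 2-D points, seeded with the farthest pair."""
    centers = list(max(((p, q) for p in points for q in points),
                       key=lambda pq: math.dist(*pq)))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = [min((0, 1), key=lambda k: math.dist(p, centers[k]))
                  for p in points]
        for k in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centers[k] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return labels

# Toy scene with two independent rigid motions: the first five feature
# points translate right, the last five translate upward (hypothetical data).
trajs = []
for i in range(10):
    x0, y0 = 5.0 * i, 3.0 * i
    dx, dy = (2.0, 0.0) if i < 5 else (0.0, 2.0)
    trajs.append([(x0 + t * dx, y0 + t * dy) for t in range(8)])

labels = kmeans2([mean_velocity(tr) for tr in trajs])
print(labels)  # the first five trajectories share one label, the last five the other
```

Clustering mean velocities only separates translational motions; handling rotation, perspective effects, or degenerate motions is what makes the full problem hard, and is what the subspace-based and learned methods listed below address.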
Latest papers
Moving Object Segmentation: All You Need Is SAM (and Flow)
The objective of this paper is motion segmentation -- discovering and segmenting the moving objects in a video.
Motion2Language, unsupervised learning of synchronized semantic motion segmentation
We find that both the contributions to the attention mechanism and the encoder architecture additively improve not only the quality of the generated text (BLEU and semantic equivalence) but also its synchronization.
RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud
Mobile autonomy relies on the precise perception of dynamic environments.
A Multi-Scale Recurrent Framework for Motion Segmentation With Event Camera
Motion segmentation is a formidable computer vision task, aiming to segment moving targets from a dynamic scene.
Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping
The Gestalt law of common fate, i.e., that what moves at the same speed belongs together, has inspired unsupervised object discovery based on motion segmentation.
Hidden Gems: 4D Radar Scene Flow Learning Using Cross-Modal Supervision
This work proposes a novel approach to 4D radar-based scene flow estimation via cross-modal learning.
Unsupervised Space-Time Network for Temporally-Consistent Segmentation of Multiple Motions
In this paper, we propose an original unsupervised spatio-temporal framework for motion segmentation from optical flow that fully investigates the temporal dimension of the problem.
GMA3D: Local-Global Attention Learning to Estimate Occluded Motions of Scene Flow
Scene flow represents the motion information of each point in the 3D point clouds.
DytanVO: Joint Refinement of Visual Odometry and Motion Segmentation in Dynamic Environments
Learning-based visual odometry (VO) algorithms achieve remarkable performance on common static scenes, benefiting from high-capacity models and massive annotated data, but tend to fail in dynamic, populated environments.
ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild
In addition, our method retains reasonable accuracy of camera poses on fully static scenes and consistently outperforms strong state-of-the-art end-to-end dense-correspondence methods, demonstrating the potential of dense indirect methods based on optical flow and point trajectories.