Motion Segmentation

54 papers with code • 4 benchmarks • 7 datasets

Motion Segmentation is an essential task in many Computer Vision and Robotics applications, such as surveillance, action recognition, and scene understanding. The classic formulation of the problem is the following: given a set of feature points tracked through a sequence of images, the goal is to cluster those trajectories according to the different motions they belong to. It is assumed that the scene contains multiple objects moving rigidly and independently in 3D space.

Source: Robust Motion Segmentation from Pairwise Matches
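
For illustration, the sketch below clusters tracked trajectories by their frame-to-frame motion using spectral clustering. It is a generic baseline under assumed inputs (a `(P, F, 2)` array of tracked points and a known number of motions), not the method of the cited paper.

```python
# Minimal sketch of the classic formulation: cluster point trajectories
# into independently moving groups. Generic baseline, assumed inputs.
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_trajectories(tracks, n_motions, sigma=5.0):
    """tracks: (P, F, 2) array of P feature points tracked over F frames."""
    vel = tracks[:, 1:, :] - tracks[:, :-1, :]           # frame-to-frame displacements
    flat = vel.reshape(len(tracks), -1)                  # (P, 2*(F-1)) motion descriptors
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    affinity = np.exp(-d2 / (2.0 * sigma ** 2))          # similar motion -> high affinity
    labels = SpectralClustering(
        n_clusters=n_motions, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    return labels                                        # one motion label per trajectory
```

Real systems replace the simple velocity affinity with subspace- or model-based affinities, but the clustering step stays the same.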

Latest papers with no code

Neuromorphic Vision-based Motion Segmentation with Graph Transformer Neural Network

no code yet • 16 Apr 2024

Moreover, we introduce the Dynamic Object Mask-aware Event Labeling (DOMEL) approach for generating approximate ground-truth labels for event-based motion segmentation datasets.

Out of the Room: Generalizing Event-Based Dynamic Motion Segmentation for Complex Scenes

no code yet • 7 Mar 2024

Rapid and reliable identification of dynamic scene parts, also known as motion segmentation, is a key challenge for mobile sensors.

A Unified Model Selection Technique for Spectral Clustering Based Motion Segmentation

no code yet • 3 Mar 2024

Motion segmentation is a fundamental problem in computer vision and is crucial in various applications such as robotics, autonomous driving and action recognition.

WoodScape Motion Segmentation for Autonomous Driving -- CVPR 2023 OmniCV Workshop Challenge

no code yet • 31 Dec 2023

Motion segmentation is a complex yet indispensable task in autonomous driving.

Appearance-based Refinement for Object-Centric Motion Segmentation

no code yet • 18 Dec 2023

The goal of this paper is to discover, segment, and track independently moving objects in complex visual scenes.

Un-EvMoSeg: Unsupervised Event-based Independent Motion Segmentation

no code yet • 30 Nov 2023

Event cameras are a novel type of biologically inspired vision sensor known for their high temporal resolution, high dynamic range, and low power consumption.

Dynamo-Depth: Fixing Unsupervised Depth Estimation for Dynamical Scenes

no code yet • NeurIPS 2023

To resolve this issue, we introduce Dynamo-Depth, a unifying approach that disambiguates dynamical motion by jointly learning monocular depth, a 3D independent flow field, and motion segmentation from unlabeled monocular videos.
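
As a rough illustration of that idea, the sketch below composes a camera-induced rigid flow with an independent flow field that is only active where a motion-segmentation mask fires. The tensor shapes and function name are assumptions for illustration, not the Dynamo-Depth implementation.

```python
# Hedged sketch: total motion = rigid (camera-induced) flow plus an
# independent flow field gated by a predicted motion-segmentation mask.
import torch

def compose_flow(rigid_flow, independent_flow, motion_mask):
    """rigid_flow, independent_flow: (B, 2, H, W); motion_mask: (B, 1, H, W) in [0, 1]."""
    # Static pixels follow the camera-induced flow; dynamic pixels additionally
    # move with their own independent flow where the mask is active.
    return rigid_flow + motion_mask * independent_flow
```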

Segmenting the motion components of a video: A long-term unsupervised model

no code yet • 2 Oct 2023

Human beings have the ability to continuously analyze a video and immediately extract the motion components.

Motion Segmentation from a Moving Monocular Camera

no code yet • 24 Sep 2023

We then construct two robust affinity matrices representing the pairwise object motion affinities throughout the whole video using epipolar geometry and the motion information provided by optical flow.
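
One building block for such epipolar-geometry-based affinities is the per-correspondence Sampson error under a fundamental matrix fitted to the dominant (camera) motion; correspondences that violate it are candidates for independent motion. The sketch below is an assumed illustration of that step, not the paper's affinity construction.

```python
# Hedged sketch: score how well each tracked correspondence obeys the
# epipolar geometry of the dominant motion (Sampson distance).
import numpy as np
import cv2

def sampson_errors(pts1, pts2):
    """pts1, pts2: (N, 2) float arrays of matched points in two frames."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    # Note: F can be None if estimation fails; omitted handling for brevity.
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])      # homogeneous coordinates
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    Fx1 = x1 @ F.T                                       # epipolar lines in image 2
    Ftx2 = x2 @ F                                        # epipolar lines in image 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den                                     # Sampson distance per match
```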

Joint Self-supervised Depth and Optical Flow Estimation towards Dynamic Objects

no code yet • 7 Sep 2023

In this work, we construct a joint inter-frame-supervised depth and optical flow estimation framework, which predicts depths in various motions by minimizing pixel warp errors in bilateral photometric re-projections and optical vectors.
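
A minimal sketch of the kind of pixel warp error referred to above: warp the source frame into the target frame with a predicted flow and penalize the per-pixel photometric difference. This is a generic self-supervised reconstruction loss under assumed tensor shapes, not the authors' exact objective.

```python
# Hedged sketch of a photometric warp loss with flow-based bilinear warping.
import torch
import torch.nn.functional as F

def photometric_warp_loss(target, source, flow):
    """target, source: (B, 3, H, W) images; flow: (B, 2, H, W) target->source flow."""
    B, _, H, W = target.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=0).unsqueeze(0).to(target)   # (1, 2, H, W) pixel grid
    coords = grid + flow                                          # sampling positions in source
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0                 # normalize to [-1, 1]
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    sample_grid = torch.stack([coords_x, coords_y], dim=-1)       # (B, H, W, 2)
    warped = F.grid_sample(source, sample_grid, align_corners=True)
    return (target - warped).abs().mean()                         # L1 photometric error
```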