Seeing Objects in a Cluttered World: Computational Objectness from Motion in Video

2 Feb 2024 · Douglas Poland, Amar Saini

Perception of the visually disjoint surfaces of our cluttered world as whole objects, physically distinct from those overlapping them, is a cognitive phenomenon called objectness that forms the basis of our visual perception. Shared by all vertebrates and present at birth in humans, it enables object-centric representation and reasoning about the visual world. We present a computational approach to objectness that leverages motion cues and spatio-temporal attention using a pair of supervised spatio-temporal R(2+1)U-Nets. The first network detects motion boundaries and classifies the pixels at those boundaries in terms of their local foreground-background sense. This motion boundary sense (MBS) information is passed, along with a spatio-temporal object attention cue, to an attentional surface perception (ASP) module, which infers the form of the attended object over a sequence of frames and classifies its 'pixels' as visible or obscured. The spatial form of the attention cue is flexible, but it must loosely track the attended object, which need not be visible. We demonstrate the ability of this simple but novel approach to infer objectness from phenomenology without object models, and show that it delivers robust perception of individual attended objects in cluttered scenes, even with blur and camera shake. We show that our data diversity and augmentation minimize bias and facilitate transfer to real video. Finally, we describe how this computational objectness capability can grow in sophistication and anchor a robust modular video object perception framework.
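No official implementation is listed for this paper. The sketch below is a hypothetical PyTorch rendering of the two-stage pipeline as described in the abstract: R(2+1) factored spatio-temporal convolutions arranged in a small U-Net, an MBS network operating on the raw clip, and an ASP network consuming the MBS output plus an attention cue. Class names, channel widths, and label layouts are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the MBS -> ASP objectness pipeline; all module names,
# channel counts, and output label sets are assumptions for illustration only.
import torch
import torch.nn as nn


class R2Plus1Conv(nn.Module):
    """A 3D convolution factored into a 2D spatial conv followed by a 1D temporal conv."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        p = k // 2
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k), padding=(0, p, p)),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1), padding=(p, 0, 0)),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (B, C, T, H, W)
        return self.block(x)


class R2Plus1UNet(nn.Module):
    """Minimal spatio-temporal U-Net with one spatial down/up-sampling stage."""

    def __init__(self, in_ch, out_ch, base=32):
        super().__init__()
        self.enc1 = R2Plus1Conv(in_ch, base)
        self.down = nn.MaxPool3d(kernel_size=(1, 2, 2))          # pool space only
        self.enc2 = R2Plus1Conv(base, base * 2)
        self.up = nn.Upsample(scale_factor=(1, 2, 2), mode="trilinear",
                              align_corners=False)
        self.dec1 = R2Plus1Conv(base * 2 + base, base)            # skip connection
        self.head = nn.Conv3d(base, out_ch, kernel_size=1)        # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)


class ObjectnessPipeline(nn.Module):
    """MBS network -> ASP network, conditioned on a per-frame object attention cue."""

    def __init__(self):
        super().__init__()
        # MBS: RGB clip -> motion-boundary / foreground-background-sense logits
        self.mbs = R2Plus1UNet(in_ch=3, out_ch=3)    # e.g. {no boundary, fg side, bg side}
        # ASP: MBS logits + 1-channel attention cue -> {background, visible, obscured}
        self.asp = R2Plus1UNet(in_ch=3 + 1, out_ch=3)

    def forward(self, clip, attention_cue):
        # clip: (B, 3, T, H, W); attention_cue: (B, 1, T, H, W), loosely tracking the object
        mbs_logits = self.mbs(clip)
        asp_logits = self.asp(torch.cat([mbs_logits, attention_cue], dim=1))
        return mbs_logits, asp_logits


if __name__ == "__main__":
    model = ObjectnessPipeline()
    clip = torch.randn(1, 3, 8, 64, 64)          # one 8-frame RGB clip
    cue = torch.zeros(1, 1, 8, 64, 64)
    cue[:, :, :, 24:40, 24:40] = 1.0             # crude box cue over the attended object
    mbs, asp = model(clip, cue)
    print(mbs.shape, asp.shape)                  # both (1, 3, 8, 64, 64)
```

The key design choice reflected here is that the attention cue does not need to segment the object; any loose spatio-temporal marker (a box, a blob) over the attended region suffices, and the ASP stage is responsible for recovering the object's full form, including obscured portions.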
