ST-HOI: A Spatial-Temporal Baseline for Human-Object Interaction Detection in Videos

25 May 2021  ·  Meng-Jiun Chiou, Chun-Yu Liao, Li-Wei Wang, Roger Zimmermann, Jiashi Feng

Detecting human-object interactions (HOI) is an important step toward machines achieving a comprehensive visual understanding. While detecting non-temporal HOIs (e.g., sitting on a chair) from static images is feasible, even humans can hardly infer temporal-related HOIs (e.g., opening/closing a door) from a single video frame; the neighboring frames play an essential role. Nevertheless, conventional HOI methods operating only on static images have been used to predict temporal-related interactions, which amounts to guessing without temporal context and may lead to sub-optimal performance. In this paper, we bridge this gap by detecting video-based HOIs with explicit temporal information. We first show that a naive temporal-aware variant of a common action-detection baseline does not work on video-based HOIs due to a feature-inconsistency issue. We then propose a simple yet effective architecture named Spatial-Temporal HOI Detection (ST-HOI), which utilizes temporal information such as human and object trajectories, correctly-localized visual features, and spatial-temporal masking pose features. We construct a new video HOI benchmark dubbed VidHOI, on which our proposed approach serves as a solid baseline.
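To make the three temporal cues concrete, below is a minimal PyTorch sketch of trajectory-aligned RoI feature extraction and a fusion head over trajectory, visual, and masking-pose streams. Only torchvision's roi_align is a real API; every module name, dimension, and fusion choice is an illustrative assumption, not the authors' released implementation.

```python
# Hedged sketch of ST-HOI-style temporal cues; illustrative, not official code.
import torch
from torch import nn
from torchvision.ops import roi_align


def trajectory_roi_features(frame_feats, traj_boxes, out_size=7):
    """Crop per-frame RoI features along one entity's trajectory.

    frame_feats: (T, C, H, W) backbone feature maps for T frames.
    traj_boxes:  (T, 4) xyxy boxes (in feature-map coordinates) tracking the
                 same entity, so that frame t is cropped where the entity
                 actually is, avoiding the feature inconsistency of reusing
                 a single keyframe box for every frame.
    Returns (T, C, out_size, out_size).
    """
    t = torch.arange(frame_feats.shape[0], dtype=traj_boxes.dtype).unsqueeze(1)
    rois = torch.cat([t, traj_boxes], dim=1)  # (T, 5): frame index + box
    return roi_align(frame_feats, rois, output_size=out_size, aligned=True)


class InteractionHead(nn.Module):
    """Fuses trajectory, visual, and masking-pose streams (hypothetical)."""

    def __init__(self, feat_dim=256, out_size=7, hidden=512, num_classes=50):
        super().__init__()
        self.traj_mlp = nn.Sequential(nn.Linear(2 * 4, hidden), nn.ReLU())
        self.vis_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim * out_size * out_size, hidden), nn.ReLU())
        # Encodes rendered masks (e.g., human pose, human box, object box).
        self.pose_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, hidden))
        self.classifier = nn.Linear(3 * hidden, num_classes)

    def forward(self, human_traj, obj_traj, human_vis, obj_vis, pose_masks):
        # human_traj, obj_traj: (T, 4); human_vis, obj_vis: (T, C, S, S);
        # pose_masks: (T, 3, M, M) spatial-temporal masks over the clip.
        traj = self.traj_mlp(torch.cat([human_traj, obj_traj], -1)).mean(0)
        vis = self.vis_mlp(torch.cat([human_vis.flatten(1),
                                      obj_vis.flatten(1)], -1)).mean(0)
        pose = self.pose_net(pose_masks).mean(0)
        return self.classifier(torch.cat([traj, vis, pose], -1))
```

Simple temporal mean-pooling stands in here for whatever aggregation the paper actually uses; the point is only that each frame's visual feature is cropped at that frame's box, and that pose masks enter as an explicit spatial-temporal stream.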


Datasets

Introduced in the Paper: VidHOI
Task: Human-Object Interaction Anticipation · Dataset: VidHOI · Model: STTRAN

  Metric                              Value   Global Rank
  Person-wise Top-5, t=1 (mAP@0.5)    29.09   #3
  Person-wise Top-5, t=3 (mAP@0.5)    27.59   #3
  Person-wise Top-5, t=5 (mAP@0.5)    27.32   #3

Task: Human-Object Interaction Detection · Dataset: VidHOI · Model: STTRAN

  Metric                              Value   Global Rank
  Oracle: Full (mAP@0.5)              28.32   #3
  Oracle: Rare (mAP@0.5)              17.74   #3
  Oracle: Non-Rare (mAP@0.5)          42.08   #3
  Detection: Full (mAP@0.5)            7.61   #3
  Detection: Rare (mAP@0.5)            3.33   #3
  Detection: Non-Rare (mAP@0.5)       13.18   #3
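All numbers above are mAP@0.5: a predicted ⟨human, interaction, object⟩ triplet counts as correct only when both its human box and object box overlap an unmatched ground-truth pair of the same interaction class with IoU ≥ 0.5, with AP computed per class and averaged. In VidHOI's two evaluation modes, the Oracle rows evaluate with ground-truth trajectories while the Detection rows use detected ones. The sketch below illustrates this standard matching rule; it is not the official VidHOI evaluation script.

```python
# Illustrative reimplementation of the HOI mAP@0.5 matching rule
# (HICO-DET-style); NOT the official VidHOI evaluator.
import numpy as np


def box_iou(a, b):
    """IoU of two xyxy boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0


def ap_at_05(preds, gts, thr=0.5):
    """AP for ONE interaction class.

    preds: [(score, human_box, obj_box)]; gts: [(human_box, obj_box)].
    A prediction is a true positive when BOTH boxes overlap an unmatched
    ground-truth pair with IoU >= thr. (Greedy first-eligible match for
    brevity; official evaluators typically match the highest-overlap pair.)
    """
    matched, hits = [False] * len(gts), []
    for _, hbox, obox in sorted(preds, key=lambda p: -p[0]):
        hit = False
        for i, (gh, go) in enumerate(gts):
            if not matched[i] and box_iou(hbox, gh) >= thr \
                    and box_iou(obox, go) >= thr:
                matched[i] = hit = True
                break
        hits.append(hit)
    if not hits:
        return 0.0
    tp = np.cumsum(hits)
    recall = tp / max(len(gts), 1)
    precision = tp / np.arange(1, len(hits) + 1)
    # All-point interpolation: integrate the precision envelope over recall.
    mrec = np.concatenate(([0.0], recall))
    mpre = np.concatenate(([0.0], precision))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    return float(np.sum((mrec[1:] - mrec[:-1]) * mpre[1:]))

# mAP@0.5 is then the mean of ap_at_05 over all interaction classes.
```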

Methods

No methods listed for this paper.