Demo2Vec: Reasoning Object Affordances From Online Videos

Watching expert demonstrations is an important way for humans and robots to reason about the affordances of unseen objects. In this paper, we consider the problem of reasoning about object affordances through the feature embedding of demonstration videos. We design the Demo2Vec model, which learns to extract embedded vectors from demonstration videos and to predict the interaction region and action label on a target image of the same object. We introduce the Online Product Review dataset for Affordance (OPRA) by collecting and labeling diverse YouTube product review videos. Our Demo2Vec model outperforms various recurrent neural network baselines on the collected dataset.
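The abstract describes the setup at a high level: encode a demonstration video into an embedding, then use that embedding to predict an interaction heatmap and an action label on a target image. The following is a minimal PyTorch sketch of that pipeline; the backbone, layer sizes, and fusion scheme are illustrative assumptions, not the authors' released architecture.

```python
# Minimal sketch (not the authors' code) of the Demo2Vec setup described in
# the abstract: video -> embedding -> (interaction heatmap, action label).
import torch
import torch.nn as nn

class Demo2VecSketch(nn.Module):
    def __init__(self, embed_dim=256, num_actions=7):
        super().__init__()
        # Per-frame CNN features (assumed backbone), summarized over time by an LSTM.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.LSTM(64, embed_dim, batch_first=True)
        # Target-image encoder (assumed), fused with the video embedding.
        self.image_cnn = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.heatmap_head = nn.Conv2d(64 + embed_dim, 1, 1)
        self.action_head = nn.Linear(embed_dim, num_actions)

    def forward(self, video, image):
        # video: (B, T, 3, H, W); image: (B, 3, H, W)
        B, T = video.shape[:2]
        feats = self.frame_cnn(video.flatten(0, 1)).view(B, T, -1)
        _, (h, _) = self.temporal(feats)
        demo_vec = h[-1]                   # (B, embed_dim) demonstration embedding
        img_feats = self.image_cnn(image)  # (B, 64, H/2, W/2)
        # Broadcast the video embedding over spatial locations and fuse.
        v = demo_vec[:, :, None, None].expand(-1, -1, *img_feats.shape[2:])
        heatmap = self.heatmap_head(torch.cat([img_feats, v], dim=1))
        return heatmap, self.action_head(demo_vec)
```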


Datasets


Introduced in the Paper:

OPRA
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Video-to-image Affordance Grounding | OPRA | Demo2Vec | KLD | 2.34 | #3 |
| Video-to-image Affordance Grounding | OPRA | Demo2Vec | Top-1 Action Accuracy | 40.79 | #3 |
| Video-to-image Affordance Grounding | OPRA (28x28) | Demo2Vec | KLD | 1.20 | #2 |
| Video-to-image Affordance Grounding | OPRA (28x28) | Demo2Vec | SIM | 0.48 | #2 |
| Video-to-image Affordance Grounding | OPRA (28x28) | Demo2Vec | AUC-J | 0.85 | #2 |
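KLD (lower is better), SIM, and AUC-J (higher is better) are standard heatmap-comparison metrics from the saliency literature. As a rough illustration, here is a minimal NumPy sketch of KLD and SIM between a predicted and a ground-truth heatmap; the benchmark's exact normalization and epsilon handling may differ.

```python
# Hedged sketch of two saliency-style metrics from the table above.
import numpy as np

def kld(pred, gt, eps=1e-12):
    """KL divergence KL(gt || pred) between two heatmaps, each normalized to sum to 1."""
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.sum(q * np.log(q / (p + eps) + eps)))

def sim(pred, gt, eps=1e-12):
    """Histogram intersection (similarity) of two normalized heatmaps, in [0, 1]."""
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.minimum(p, q).sum())

# Example: compare a predicted 28x28 heatmap to a ground-truth annotation.
pred = np.random.rand(28, 28)
gt = np.zeros((28, 28)); gt[10:14, 10:14] = 1.0
print(kld(pred, gt), sim(pred, gt))
```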

Methods


No methods listed for this paper.