Interaction Region Visual Transformer for Egocentric Action Anticipation

25 Nov 2022  ·  Debaditya Roy, Ramanathan Rajendiran, Basura Fernando ·

Human-object interaction is one of the most important visual cues, and we propose a novel way to represent human-object interactions for egocentric action anticipation. We propose a transformer variant that models interactions by computing the change in appearance of objects and human hands caused by the execution of actions, and uses these changes to refine the video representation. Specifically, we model interactions between hands and objects using Spatial Cross-Attention (SCA), and we further infuse contextual information using Trajectory Cross-Attention to obtain environment-refined interaction tokens. From these tokens, we construct an interaction-centric video representation for action anticipation. We call our model InAViT; it achieves state-of-the-art action anticipation performance on the large-scale egocentric datasets EPIC-KITCHENS-100 (EK100) and EGTEA Gaze+. InAViT outperforms other visual-transformer-based methods, including those using object-centric video representations. On the EK100 evaluation server, InAViT is the top-performing method on the public leaderboard (at the time of submission), outperforming the second-best model by 3.3% on mean top-5 recall.
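To make the hand-object Spatial Cross-Attention idea concrete, here is a minimal sketch (not the authors' implementation): hand tokens act as queries and detected-object tokens as keys and values, so each hand representation is refined by the objects it may interact with. The class name, single-head design, token counts, and embedding dimension are all illustrative assumptions.

```python
# Illustrative sketch of SCA-style hand-object cross-attention.
# All names and dimensions are assumptions, not the paper's code.
import torch
import torch.nn as nn


class SpatialCrossAttention(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # projects hand tokens to queries
        self.k = nn.Linear(dim, dim)  # projects object tokens to keys
        self.v = nn.Linear(dim, dim)  # projects object tokens to values
        self.scale = dim ** -0.5

    def forward(self, hand_tokens: torch.Tensor, obj_tokens: torch.Tensor) -> torch.Tensor:
        # hand_tokens: (B, H, dim); obj_tokens: (B, O, dim)
        q = self.q(hand_tokens)
        k = self.k(obj_tokens)
        v = self.v(obj_tokens)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, H, O)
        attn = attn.softmax(dim=-1)
        # Residual connection keeps the original hand appearance and adds
        # object-conditioned refinement, yielding interaction tokens.
        return hand_tokens + attn @ v


# Toy usage: 2 hand tokens attend over 4 object tokens.
sca = SpatialCrossAttention(dim=768)
hands = torch.randn(1, 2, 768)
objects = torch.randn(1, 4, 768)
interaction_tokens = sca(hands, objects)
print(interaction_tokens.shape)  # torch.Size([1, 2, 768])
```

In the paper, such interaction tokens are further refined with contextual information via Trajectory Cross-Attention before being pooled into the video representation; the sketch above covers only the spatial hand-object step.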


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Action Anticipation | EGTEA | InAViT | Top-1 Accuracy | 67.8 | #1 |
| Action Anticipation | EPIC-KITCHENS-100 | InAViT | Recall@5 | 25.89 | #1 |
| Action Anticipation | EPIC-KITCHENS-100 (test) | InAViT | Recall@5 | 23.75 | #1 |