Video Transformer Network

1 Feb 2021 · Daniel Neimark, Omri Bar, Maya Zohar, Dotan Asselmann

This paper presents VTN, a transformer-based framework for video recognition. Inspired by recent developments in vision transformers, we ditch the standard approach in video action recognition that relies on 3D ConvNets and introduce a method that classifies actions by attending to information from the entire video sequence. Our approach is generic and builds on top of any given 2D spatial network. In terms of wall-clock runtime, it trains $16.1\times$ faster and runs $5.1\times$ faster during inference while maintaining competitive accuracy compared to other state-of-the-art methods. It enables whole-video analysis, via a single end-to-end pass, while requiring $1.5\times$ fewer GFLOPs. We report competitive results on Kinetics-400 and present an ablation study of VTN properties and the trade-off between accuracy and inference speed. We hope our approach will serve as a new baseline and start a fresh line of research in the video recognition domain. Code and models are available at: https://github.com/bomri/SlowFast/blob/master/projects/vtn/README.md
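The abstract outlines the core design: a 2D spatial backbone embeds each frame independently, a temporal attention encoder then attends over the whole frame sequence, and a classification head reads out the action label. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation (see the linked repository for that). All class names, hyperparameters, and the choice of a ResNet-50 backbone and a standard nn.TransformerEncoder (in place of the paper's Longformer-based temporal encoder) are assumptions made for brevity.

```python
# Hypothetical sketch of the VTN idea: per-frame 2D backbone -> temporal
# transformer over the full sequence -> classification from a [CLS] token.
# Not the authors' code; names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models


class VTNSketch(nn.Module):
    def __init__(self, num_classes=400, embed_dim=768, depth=3, heads=12,
                 max_frames=250):
        super().__init__()
        # 2D spatial backbone applied to each frame independently
        # (ResNet-50 here for brevity; the paper's strongest variant uses ViT-B).
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()          # keep the 2048-d pooled features
        self.backbone = backbone
        self.proj = nn.Linear(2048, embed_dim)
        # Learned [CLS] token and temporal position embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 1 + max_frames, embed_dim))
        # Temporal attention encoder over the frame sequence
        # (standard encoder as a stand-in for the paper's Longformer).
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           dim_feedforward=embed_dim * 4,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Sequential(nn.LayerNorm(embed_dim),
                                  nn.Linear(embed_dim, num_classes))

    def forward(self, video):                # video: (B, T, C, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)         # (B*T, C, H, W)
        feats = self.proj(self.backbone(frames))   # (B*T, D)
        feats = feats.view(b, t, -1)         # (B, T, D)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, feats], dim=1) + self.pos_embed[:, : t + 1]
        x = self.temporal(x)                 # attend over the whole sequence
        return self.head(x[:, 0])            # classify from the [CLS] token


# Example: classify a batch of two 16-frame RGB clips at 224x224.
model = VTNSketch(num_classes=400)
logits = model(torch.randn(2, 16, 3, 224, 224))   # shape (2, 400)
```

Because attention runs over frame embeddings rather than raw pixels, the whole video can be processed in a single end-to-end pass, which is the source of the reported inference-speed and GFLOPs advantages.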


Results from the Paper


Ranked #14 on Action Classification on MiT (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Classification | Kinetics-400 | ViT-B-VTN + ImageNet-21K (84.0 [10]) | Acc@5 | 94.2 | #78 |
| Action Classification | Kinetics-400 | ViT-B-VTN + ImageNet-21K (84.0 [10]) | Acc@1 | 79.8 | #98 |
| Action Classification | Kinetics-400 | ViT-B-VTN (3 layers, ImageNet pretrain) | Acc@1 | 78.6 | #118 |
| Action Classification | Kinetics-400 | ViT-B-VTN (3 layers, ImageNet pretrain) | Acc@5 | 93.7 | #86 |
| Action Classification | Kinetics-400 | ViT-B-VTN (1 layer, ImageNet pretrain) | Acc@5 | 93.4 | #94 |
| Action Classification | MiT | VTN | Top-1 Accuracy | 37.4 | #14 |
| Action Classification | MiT | VTN | Top-5 Accuracy | 65.4 | #7 |

Methods


No methods listed for this paper.