Hallucinating IDT Descriptors and I3D Optical Flow Features for Action Recognition with CNNs

ICCV 2019  ·  Lei Wang, Piotr Koniusz, Du Q. Huynh

In this paper, we revive the use of old-fashioned handcrafted video representations for action recognition and put new life into these techniques via a CNN-based hallucination step. Despite the use of RGB and optical flow frames, the I3D model (amongst others) thrives on combining its output with Improved Dense Trajectory (IDT) low-level video descriptors encoded via Bag-of-Words (BoW) and Fisher Vectors (FV). Such a fusion of CNNs and handcrafted representations is time-consuming due to pre-processing, descriptor extraction, encoding and parameter tuning. Thus, we propose an end-to-end trainable network with streams which learn the IDT-based BoW/FV representations at the training stage and are simple to integrate with the I3D model. Specifically, each stream takes I3D feature maps ahead of the last 1D conv. layer and learns to 'translate' these maps into BoW/FV representations. Thus, our model can hallucinate and use such synthesized BoW/FV representations at the testing stage, with no IDT extraction required. We show that even the features of the entire I3D optical flow stream can be hallucinated, further simplifying the pipeline. Our model saves 20-55 hours of computation and yields state-of-the-art results on four publicly available datasets.
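To make the hallucination idea concrete, below is a minimal PyTorch sketch of one such stream: a small network regresses (temporally pooled) I3D feature maps onto precomputed Fisher Vector targets at training time, so that at test time the FV can be synthesized without running IDT. The class name `FVHallucinationStream`, the layer sizes, the feature dimensions (1024, 16384) and the plain MSE objective are illustrative assumptions, not the authors' exact architecture or loss.

```python
# Sketch of a hallucination stream, assuming hypothetical dimensions:
# pooled I3D features (1024-d) regressed onto a precomputed FV (16384-d).
import torch
import torch.nn as nn

class FVHallucinationStream(nn.Module):
    def __init__(self, i3d_dim: int = 1024, fv_dim: int = 16384):
        super().__init__()
        # Small translator network: I3D features -> synthesized FV.
        self.translator = nn.Sequential(
            nn.Linear(i3d_dim, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, fv_dim),
        )

    def forward(self, i3d_feats: torch.Tensor) -> torch.Tensor:
        # i3d_feats: (batch, i3d_dim), e.g. temporally pooled I3D maps
        # taken ahead of the last 1D conv. layer.
        return self.translator(i3d_feats)

# Training step: regress onto ground-truth FVs (needed only at train time).
stream = FVHallucinationStream()
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(stream.parameters(), lr=1e-3)

i3d_feats = torch.randn(8, 1024)    # dummy I3D features
fv_targets = torch.randn(8, 16384)  # dummy precomputed Fisher Vectors

optimizer.zero_grad()
loss = loss_fn(stream(i3d_feats), fv_targets)
loss.backward()
optimizer.step()

# Test time: no IDT extraction; the stream hallucinates the FV directly.
with torch.no_grad():
    synthesized_fv = stream(torch.randn(1, 1024))
```

A BoW stream would follow the same pattern with a BoW-sized output; the synthesized representations are then fused with the I3D classifier outputs.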


Results from the Paper


Ranked #3 on Scene Recognition on YUP++ (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|-------------|--------------|-------------|--------------------------|
| Action Classification | Charades | HAF+BoW/FV/OFF halluc. +MSK×8/PN | mAP | 43.1 | #24 | |
| Action Recognition | HMDB-51 | HAF+BoW/FV halluc. | Average accuracy of 3 splits | 82.48 | #14 | |
| Scene Recognition | YUP++ | HAF+BoW/FV halluc. | Accuracy (%) | 92.6 | #3 | Yes |

Methods


No methods listed for this paper.