Frame-based models already perform quite well on action recognition; is pre-training for good image features sufficient, or does pre-training for spatio-temporal features yield better transfer learning?
In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition.
#3 best model for Action Recognition In Videos on Sports-1M
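One of the convolution forms studied in this line of work is the (2+1)D factorization, which splits a full t×d×d 3D kernel into a 1×d×d spatial convolution followed by a t×1×1 temporal convolution, with the intermediate channel count chosen to keep the parameter budget comparable. A minimal sketch of that parameter-matching arithmetic in pure Python (function names are illustrative, not from any library):

```python
def midplanes(t, d, n_in, n_out):
    """Intermediate channel count M for a (2+1)D factorization of a full
    t x d x d 3D convolution (n_in -> n_out channels), chosen so that the
    factorized block (1 x d x d spatial conv, then t x 1 x 1 temporal conv)
    has roughly the same number of parameters as the full 3D kernel."""
    full = t * d * d * n_in * n_out          # params of the full 3D conv
    return full // (d * d * n_in + t * n_out)

def params_2plus1d(t, d, n_in, n_out):
    """Total parameter count of the factorized (2+1)D block."""
    m = midplanes(t, d, n_in, n_out)
    spatial = d * d * n_in * m               # 1 x d x d spatial conv
    temporal = t * m * n_out                 # t x 1 x 1 temporal conv
    return spatial + temporal

# Example: a 3x3x3 kernel mapping 64 -> 64 channels.
full = 3 * 3 * 3 * 64 * 64                   # 110592 parameters
fact = params_2plus1d(3, 3, 64, 64)
print(full, fact)                            # here the counts match exactly
```

The point of matching parameters is that any accuracy difference between 3D and (2+1)D variants can be attributed to the factorization itself rather than to model capacity.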
We empirically demonstrate a general and robust grid schedule that yields a significant out-of-the-box training speedup without a loss in accuracy for different models (I3D, non-local, SlowFast), datasets (Kinetics, Something-Something, Charades), and training settings (with and without pre-training, 128 GPUs or 1 GPU).
SOTA for Action Detection on Charades
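The grid schedule referred to above varies the mini-batch shape during training: when clips are shorter and lower-resolution, the batch size can grow proportionally so per-iteration compute stays roughly constant. A toy sketch of that trade-off (this is an illustration of the idea, not the paper's actual schedule or hyper-parameters):

```python
def multigrid_schedule(base_batch, base_t, base_s, shape_factors):
    """Illustrative grid schedule: for each (t_factor, s_factor) pair,
    scale the batch size so that batch * T * S^2 stays roughly constant,
    keeping per-iteration compute approximately fixed."""
    schedule = []
    for tf, sf in shape_factors:
        t = max(1, int(base_t * tf))          # clip length in frames
        s = int(base_s * sf)                  # spatial crop size
        scale = (base_t * base_s * base_s) / (t * s * s)
        schedule.append((int(base_batch * scale), t, s))
    return schedule

# Shorter, lower-resolution clips allow proportionally larger batches.
sched = multigrid_schedule(8, 16, 224,
                           [(0.25, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)])
print(sched)  # [(128, 4, 112), (64, 8, 112), (16, 8, 224), (8, 16, 224)]
```

Cycling through such shapes lets most iterations run on cheap mini-batches while the final ones train at full resolution, which is where the out-of-the-box speedup comes from.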
The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels.
#13 best model for Action Recognition In Videos on UCF101
Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow.
SOTA for Action Recognition In Videos on ActivityNet (using extra training data)
Dynamics of human body skeletons convey significant information for human action recognition.
#2 best model for Action Recognition In Videos on IRD
Furthermore, based on the temporal segment networks, we won the video classification track at the ActivityNet challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and the proposed good practices.
#10 best model for Action Classification on Moments in Time (Top 5 Accuracy metric)
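The core mechanism behind temporal segment networks is sparse temporal sampling: split the video into K equal-duration segments and draw one snippet from each, so the sampled frames cover the whole video instead of one local window. A minimal sketch (function name and seeding are illustrative assumptions):

```python
import random

def sample_snippet_indices(num_frames, k, seed=0):
    """TSN-style sparse sampling sketch: divide num_frames into k
    equal-duration segments and pick one frame index uniformly at
    random from each segment."""
    rng = random.Random(seed)
    seg_len = num_frames // k
    return [i * seg_len + rng.randrange(seg_len) for i in range(k)]

# Three snippets from a 300-frame video, one per third of the video.
idxs = sample_snippet_indices(300, 3)
print(idxs)  # each index falls inside its own 100-frame segment
```

Because each snippet votes through a shared backbone and the segment scores are aggregated, the network models long-range temporal structure at the cost of only K forward passes per video.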
The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks.
#2 best model for Action Recognition In Videos on UCF101