ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding

Despite many dataset efforts for human action recognition, current computer vision algorithms remain severely limited in the variability and complexity of the actions they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring in manually trimmed videos. In this paper, we introduce ActivityNet: a new large-scale video benchmark for human activity understanding. Our new benchmark aims to cover a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity categories, with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 hours of video. We illustrate three scenarios in which ActivityNet can be used to benchmark and compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification, and activity detection.
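
The statistics quoted in the abstract (number of classes, videos per class, instances per video, total hours) can be recomputed directly from the released annotations. Below is a minimal sketch in Python, assuming an ActivityNet-style annotation JSON with a top-level "database" dict keyed by video id, where each entry carries a "duration" in seconds and an "annotations" list of {"label", "segment"} dicts; the filename activity_net.json is a hypothetical placeholder, so check both against the actual release.

    import json
    from collections import defaultdict

    def summarize(annotation_path):
        """Print dataset-level statistics from an ActivityNet-style
        annotation file (schema assumed as described above)."""
        with open(annotation_path) as f:
            database = json.load(f)["database"]

        videos_per_class = defaultdict(set)  # label -> set of video ids
        total_instances = 0
        total_hours = 0.0
        for video_id, entry in database.items():
            total_hours += entry.get("duration", 0.0) / 3600.0
            for instance in entry.get("annotations", []):
                videos_per_class[instance["label"]].add(video_id)
                total_instances += 1

        n_videos = len(database)
        n_classes = len(videos_per_class)
        avg_videos = sum(len(v) for v in videos_per_class.values()) / n_classes
        print(f"{n_classes} classes, {n_videos} videos, {total_hours:.0f} hours")
        print(f"avg videos per class: {avg_videos:.1f}")
        print(f"avg instances per video: {total_instances / n_videos:.2f}")

    if __name__ == "__main__":
        summarize("activity_net.json")  # hypothetical path to the annotations

On the release described in the paper, the printed figures should roughly match the abstract's 203 classes, 137 videos per class, and 1.41 instances per video, which makes this a quick sanity check after downloading the annotations.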


Datasets


Introduced in the Paper: ActivityNet

Used in the Paper: ImageNet, HMDB51, MPII, Sports-1M
