ActionNet-VE Dataset: A Dataset for Describing Visual Events by Extending VIRAT Ground 2.0

This paper introduces a dataset for recognizing and describing interactive events between objects of interest, including persons, cars, bikes, and carried objects. Although many video datasets exist for human activity recognition, most focus on persons and their actions and often omit specific information about related objects, such as object types and minimum bounding boxes, from their annotations. The ActionNet-VE dataset was designed to include full annotations on all objects and events of interest occurring in a video clip, in order to describe the semantics of each event. The dataset adopts 75 video clips from VIRAT Ground 2.0 and extends their annotations to cover the events and their related objects. In addition, the dataset describes the semantics of each event using sentence elements such as verb, subject, and object.
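
As a hypothetical illustration of the annotation scheme described above, a single clip's record might look like the Python sketch below. The paper does not publish a file format, so the field names, the clip identifier, and the "get_into" verb are all illustrative assumptions, not the dataset's actual schema:

    # A minimal, hypothetical sketch of one ActionNet-VE-style annotation
    # record. All field names and values are illustrative assumptions;
    # the paper does not specify a concrete file format.
    annotation = {
        "clip": "VIRAT_S_000201_00.mp4",  # hypothetical clip identifier
        "objects": [
            # Every object of interest gets a type and a minimum
            # bounding box (x, y, width, height) at a given frame.
            {"id": 1, "type": "person", "frame": 120, "bbox": [312, 205, 41, 96]},
            {"id": 2, "type": "car",    "frame": 120, "bbox": [402, 188, 130, 72]},
        ],
        "events": [
            # Event semantics expressed as sentence elements:
            # a verb plus references to the subject and object.
            {"verb": "get_into", "subject": 1, "object": 2,
             "start_frame": 110, "end_frame": 165},
        ],
    }

    # Example use: render each annotated event as a simple sentence.
    obj_types = {o["id"]: o["type"] for o in annotation["objects"]}
    for ev in annotation["events"]:
        print(obj_types[ev["subject"]], ev["verb"], obj_types[ev["object"]])
        # -> person get_into car

This sketch reflects the two properties the abstract emphasizes: every related object carries a type and a minimum bounding box, and each event is described by verb, subject, and object rather than by an action label alone.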
