VideoCC3M (Video-Conceptual-Captions)

Introduced by Nagrani et al. in Learning Audio-Video Modalities from Image Captions

We propose a new, scalable video-mining pipeline which transfers captioning supervision from image datasets to video and audio. We use this pipeline to mine paired video clips and captions, using the Conceptual Captions 3M image dataset as a seed dataset. Our resulting dataset, VideoCC3M, consists of millions of weakly paired clips with text captions and will be released publicly.

The core idea of our mining pipeline is to start with an image captioning dataset and, for each image-caption pair in the dataset, find frames in videos that are similar to the image. We then extract short video clips around the matching frames and transfer the caption to those clips. See the paper for the steps in detail.
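As a rough illustration only, the matching step can be thought of as a similarity search between image embeddings and per-frame video embeddings, with the caption transferred to clips cut around high-scoring frames. The sketch below assumes hypothetical helpers (`embed_image`, `embed_frames`, `extract_clip`) and an arbitrary similarity threshold; it is not the authors' exact implementation.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8   # hypothetical cut-off, not taken from the paper
CLIP_SECONDS = 10            # hypothetical clip length around a matched frame


def mine_clips(image_caption_pairs, videos, embed_image, embed_frames, extract_clip):
    """Transfer captions from images to short video clips around similar frames.

    embed_image / embed_frames / extract_clip are assumed helpers: the first two
    return L2-normalised embeddings, the last cuts a fixed-length clip around a
    timestamp in the video.
    """
    mined = []
    for image, caption in image_caption_pairs:
        img_emb = embed_image(image)                      # shape (D,)
        for video in videos:
            frame_embs, timestamps = embed_frames(video)  # shapes (N, D), (N,)
            sims = frame_embs @ img_emb                   # cosine similarity (embeddings normalised)
            for idx in np.flatnonzero(sims > SIMILARITY_THRESHOLD):
                clip = extract_clip(video, timestamps[idx], CLIP_SECONDS)
                mined.append((clip, caption))             # weakly paired clip-text example
    return mined
```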

We ran our mining pipeline with the Conceptual Captions 3M (CC3M) image captioning dataset. We only use the images in the dataset which are still publicly available online, which gives us 1.25M image-caption pairs. We apply our pipeline to online videos, filtering for view count > 1000, length < 20 minutes, and upload date within the last 10 years but at least 90 days ago, and further filtering with content-appropriateness signals to get 150M videos. This gives us 10.3M clip-text pairs with 6.3M video clips (17.5K hours of video in total) and 970K unique captions. We call the resulting dataset VideoCC3M.
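For illustration, the metadata filtering above can be expressed as a simple per-video predicate. The field names (`view_count`, `duration_s`, `upload_date`, `is_appropriate`) are hypothetical stand-ins for whatever metadata and content-appropriateness signals are actually available; this is a minimal sketch, not the pipeline's real filtering code.

```python
from datetime import datetime, timedelta


def passes_metadata_filter(video, now=None):
    """Keep a video only if it matches the filters described above (hypothetical fields)."""
    now = now or datetime.utcnow()
    return (
        video["view_count"] > 1000
        and video["duration_s"] < 20 * 60
        and now - timedelta(days=365 * 10) <= video["upload_date"] <= now - timedelta(days=90)
        and video["is_appropriate"]  # stand-in for content-appropriateness signals
    )
```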

License


  • Unknown
