Video Description

26 papers with code • 0 benchmarks • 7 datasets

The goal of automatic Video Description is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage. A sketch of what dense-captioning output looks like follows the source note below.

Source: Joint Event Detection and Description in Continuous Video Streams
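
To make the dense-captioning setting concrete, here is a minimal sketch of what such a system's output could look like: a set of timestamped events, each paired with one sentence. The `DescribedEvent` structure and the example captions are hypothetical, not taken from any paper listed here.

```python
# Minimal sketch of dense video captioning output: each detected event
# is a time span plus one sentence. Names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class DescribedEvent:
    start_s: float  # event start, in seconds
    end_s: float    # event end, in seconds
    sentence: str   # natural-language description of the event

# Hypothetical output for a short cooking clip; note the spans may overlap.
events = [
    DescribedEvent(0.0, 12.5, "A person chops vegetables on a cutting board."),
    DescribedEvent(10.0, 31.0, "They add the vegetables to a pan and stir."),
]
```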

Delving Deeper into the Decoder for Video Captioning

WingsBrokenAngel/delving-deeper-into-the-decoder-for-video-captioning

Video captioning is a multi-modal task that aims to describe a video clip using a natural language sentence.

37 stars • 16 Jan 2020

VizSeq: A Visual Analysis Toolkit for Text Generation Tasks

facebookresearch/vizseq IJCNLP 2019

Automatic evaluation of text generation tasks (e.g., machine translation, text summarization, image captioning, and video description) usually relies heavily on task-specific metrics such as BLEU and ROUGE; a minimal scoring sketch follows this entry.

438 stars • 12 Sep 2019
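
Since the VizSeq entry above mentions BLEU, here is a minimal corpus-level scoring sketch using the sacrebleu package; the hypothesis and reference captions are invented for illustration, not real system output.

```python
# Corpus-level BLEU for generated captions, via sacrebleu.
# The captions below are made-up examples.
import sacrebleu

hypotheses = [
    "a man is slicing a tomato",
    "a dog runs across the yard",
]
# One reference stream, aligned index-by-index with the hypotheses.
references = [[
    "a man slices a tomato on a cutting board",
    "a dog is running through a yard",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```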

VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research

eric-xw/Video-guided-Machine-Translation ICCV 2019

We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, aimed at translating a source-language description into the target language using the video as additional spatiotemporal context.

48 stars • 06 Apr 2019

Grounded Video Description

facebookresearch/grounded-video-description CVPR 2019

Our dataset, ActivityNet-Entities, augments the challenging ActivityNet Captions dataset with 158k bounding box annotations, each grounding a noun phrase; an illustrative annotation record follows this entry.

311 stars • 17 Dec 2018
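
As a rough illustration of what a grounding annotation in the spirit of ActivityNet-Entities could contain, consider the hypothetical record below; the field names and values are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical grounded-caption record: a caption plus a box tying one
# noun phrase to a region in one annotated frame. Not the real schema.
annotation = {
    "video_id": "v_example",
    "caption": "A man throws a ball to a dog.",
    "grounding": {
        "noun_phrase": "a dog",
        "frame_s": 4.2,                   # timestamp of the annotated frame
        "bbox_xywh": [310, 180, 96, 72],  # pixel box: x, y, width, height
    },
}
```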

Adversarial Inference for Multi-Sentence Video Description

jamespark3922/adv-inf CVPR 2019

Among the main issues are the fluency and coherence of the generated descriptions, and their relevance to the video.

34 stars • 13 Dec 2018

Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7

hudaAlamri/DSTC7-Audio-Visual-Scene-Aware-Dialog-AVSD-Challenge

Scene-aware dialog systems will be able to have conversations with users about the objects and events around them.

53 stars • 01 Jun 2018

Predicting Visual Features from Text for Image and Video Caption Retrieval

danieljf24/w2vv

This paper strives to find, amidst a set of sentences, the one that best describes the content of a given image or video; a sketch of the ranking step follows this entry.

69 stars • 05 Sep 2017
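
The retrieval formulation above reduces to a ranking step once sentences can be mapped into the video's visual feature space. The sketch below assumes such a text-to-visual encoder (the `text_to_visual` parameter is a hypothetical stand-in for a learned model, not the paper's API) and ranks candidate sentences by cosine similarity; it illustrates the general idea, not the paper's implementation.

```python
# Rank candidate sentences by how close their predicted visual features
# are to the video's feature vector. `text_to_visual` is a stand-in for
# a learned text encoder that outputs a visual-space feature vector.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def best_caption(video_feat: np.ndarray, sentences, text_to_visual):
    """Return the sentence whose predicted visual feature is most
    similar to the video's feature vector."""
    scores = [cosine(video_feat, text_to_visual(s)) for s in sentences]
    return sentences[int(np.argmax(scores))]
```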

Egocentric Video Description based on Temporally-Linked Sequences

MarcBS/TMA

We propose a novel methodology that exploits information from temporally neighboring events, precisely matching the nature of egocentric sequences.

11 stars • 07 Apr 2017

Memory-augmented Attention Modelling for Videos

rasoolfa/videocap

We present a method to improve video description generation by modeling higher-order interactions between video frames and described concepts.

10 stars • 07 Nov 2016