Video-based Generative Performance Benchmarking (Contextual Understanding)

11 papers with code • 1 benchmark • 1 dataset

This benchmark evaluates a generative video conversational model on contextual understanding.

We curate a test set based on the ActivityNet-200 dataset, featuring videos with rich, dense descriptive captions and human-annotated question-answer pairs. We develop an evaluation pipeline that uses the GPT-3.5 model to assign each generated prediction a relative score on a scale of 1-5.
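A minimal sketch of such a GPT-assisted scoring loop is shown below, assuming the OpenAI Chat Completions API. The prompt wording and the `score_prediction` helper are illustrative assumptions, not the benchmark's actual evaluation prompt.

```python
# Sketch of a GPT-based evaluation pipeline for contextual understanding.
# Assumes the OpenAI Python SDK (v1.x); the prompt text below is a
# hypothetical stand-in for the benchmark's real evaluation prompt.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_prediction(question: str, answer: str, prediction: str) -> int:
    """Ask GPT-3.5 to rate a model's prediction on a 1-5 scale."""
    prompt = (
        "You are evaluating the contextual understanding of a video "
        "conversational model.\n"
        f"Question: {question}\n"
        f"Ground-truth answer: {answer}\n"
        f"Predicted answer: {prediction}\n"
        "Rate how well the prediction captures the context of the "
        "ground-truth answer on a scale of 1-5. Reply with the number only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Extract the first digit 1-5 from the reply; fall back to the
    # lowest score if the model returns something unparseable.
    match = re.search(r"[1-5]", response.choices[0].message.content)
    return int(match.group()) if match else 1

# Usage over a list of (question, answer, prediction) triples:
# scores = [score_prediction(q, a, p) for q, a, p in test_set]
# mean_score = sum(scores) / len(scores)
```

Averaging the per-sample scores yields the benchmark's relative contextual-understanding score for a model.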

Most implemented papers

MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens

Vision-CAIR/MiniGPT4-video • 4 Apr 2024

This paper introduces MiniGPT4-Video, a multimodal Large Language Model (LLM) designed specifically for video understanding.