PVLAM (LREC) 2022 • Erika Loc, Keith Curtis, George Awad, Shahzad Rajput, Ian Soboroff
The objective of the framework is to provide a guide to annotating long-duration videos in support of tasks and challenges in the video and multimedia understanding domains.
22 Jun 2023 • George Awad, Keith Curtis, Asad Butt, Jonathan Fiscus, Afzal Godil, Yooyoung Lee, Andrew Delgado, Eliot Godard, Lukas Diduch, Jeffrey Liu, Yvette Graham, Georges Quenot
The TREC Video Retrieval Evaluation (TRECVID) is a TREC-style video analysis and retrieval evaluation with the goal of promoting progress in research and development of content-based exploitation and retrieval of information from digital video via open, tasks-based evaluation supported by metrology.
27 Apr 2021 • George Awad, Asad A. Butt, Keith Curtis, Jonathan Fiscus, Afzal Godil, Yooyoung Lee, Andrew Delgado, Jesse Zhang, Eliot Godard, Baptiste Chocot, Lukas Diduch, Jeffrey Liu, Alan F. Smeaton, Yvette Graham, Gareth J. F. Jones, Wessel Kraaij, Georges Quenot
In total, 29 teams from various research organizations worldwide completed one or more of six tasks.
21 Sep 2020 • George Awad, Asad A. Butt, Keith Curtis, Yooyoung Lee, Jonathan Fiscus, Afzal Godil, Andrew Delgado, Jesse Zhang, Eliot Godard, Lukas Diduch, Alan F. Smeaton, Yvette Graham, Wessel Kraaij, Georges Quenot
The TREC Video Retrieval Evaluation (TRECVID) 2019 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in research and development of content-based exploitation and retrieval of information from digital video via open, metrics-based evaluation.
1 May 2020 • Keith Curtis, George Awad, Shahzad Rajput, Ian Soboroff
In this paper, we propose a new evaluation challenge and direction in the area of high-level video understanding.
29 Oct 2017 • Yvette Graham, George Awad, Alan Smeaton
We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video.