How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection

Object detection (OD) in computer vision has made significant progress in recent years, transitioning from closed-set labels to open-vocabulary detection (OVD) based on large-scale vision-language pre-training (VLP). However, current evaluation methods and datasets are limited to testing generalization over object types and referral expressions, which do not provide a systematic, fine-grained, and accurate benchmark of OVD models' abilities. In this paper, we propose a new benchmark named OVDEval, which includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric called Non-Maximum Suppression Average Precision (NMS-AP) to address this issue. Extensive experimental results show that existing top OVD models all fail on the new tasks except for simple object types, demonstrating the value of the proposed dataset in pinpointing the weakness of current OVD models and guiding future research. Furthermore, the proposed NMS-AP metric is verified by experiments to provide a much more truthful evaluation of OVD models, whereas traditional AP metrics yield deceptive results. Data is available at \url{https://github.com/om-ai-lab/OVDEval}
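The core metric change described above, NMS-AP, can be illustrated with a short sketch. The idea, as stated in the abstract, is to suppress duplicate boxes across different (fine-grained, hard-negative) labels before computing standard AP, so a model that hedges by predicting the same box under every candidate label is penalized. The sketch below is a hedged illustration of that idea only: the data layout, function names, and IoU threshold are assumptions for demonstration, not the authors' reference implementation (see the OVDEval repository for the official evaluation code).

```python
# Illustrative sketch of the class-agnostic NMS step behind NMS-AP.
# Data layout, names, and the 0.5 IoU threshold are assumptions,
# not the authors' reference implementation.
from typing import Dict, List


def iou(a: List[float], b: List[float]) -> float:
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def class_agnostic_nms(preds: List[Dict], iou_thr: float = 0.5) -> List[Dict]:
    """Greedy NMS across *all* labels: overlapping boxes keep only the
    highest-scoring prediction, so duplicates assigned to hard-negative
    labels are removed before AP is computed per label."""
    preds = sorted(preds, key=lambda p: p["score"], reverse=True)
    kept: List[Dict] = []
    for p in preds:
        if all(iou(p["box"], k["box"]) < iou_thr for k in kept):
            kept.append(p)
    return kept


# Usage sketch: run class_agnostic_nms on each image's raw detections
# (dicts with "box", "score", "label"), then feed the surviving boxes
# to a standard COCO-style per-label AP computation.
```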


Datasets


Introduced in the Paper:

OVDEval

Used in the Paper:

MS COCO, Visual Genome, LVIS

Results from the Paper


Task          Dataset    Model    Metric    Value    Global Rank
Negation      OVDEval    GLIP     NMS-AP    29.3     #1
Negation      OVDEval    FIBER    NMS-AP    28.7     #2
Proper Noun   OVDEval    GLIP     NMS-AP    11.7     #1
Proper Noun   OVDEval    FIBER    NMS-AP    6.03     #2

Methods


No methods listed for this paper.