CounTR: Transformer-based Generalised Visual Counting

29 Aug 2022  ·  Chang Liu, Yujie Zhong, Andrew Zisserman, Weidi Xie

In this paper, we consider the problem of generalised visual object counting, with the goal of developing a computational model for counting the number of objects from arbitrary semantic categories, using an arbitrary number of "exemplars", i.e. zero-shot or few-shot counting. To this end, we make the following four contributions: (1) We introduce a novel transformer-based architecture for generalised visual object counting, termed Counting Transformer (CounTR), which explicitly captures the similarity between image patches, or between patches and the given "exemplars", with the attention mechanism; (2) We adopt a two-stage training regime that first pre-trains the model with self-supervised learning, followed by supervised fine-tuning; (3) We propose a simple, scalable pipeline for synthesizing training images with a large number of instances, or with instances from different semantic categories, explicitly forcing the model to make use of the given "exemplars"; (4) We conduct thorough ablation studies on the large-scale counting benchmark FSC-147, and demonstrate state-of-the-art performance in both the zero-shot and few-shot settings.
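To make the architectural idea concrete, below is a minimal PyTorch sketch of how image patch tokens can attend to exemplar tokens via cross-attention, with a small head producing a density map whose sum gives the predicted count. This is an illustrative sketch under assumed module names and sizes (ExemplarCrossAttention, DensityHead, dim=256), not the authors' released implementation.

```python
# Illustrative sketch: patch tokens (queries) attend to exemplar tokens
# (keys/values); a per-patch density is summed to obtain the count.
import torch
import torch.nn as nn

class ExemplarCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_tokens, exemplar_tokens):
        # patch_tokens:    (B, N_patches, dim)   -- from a ViT-style image encoder
        # exemplar_tokens: (B, N_exemplars, dim) -- features of the given "exemplars"
        attended, _ = self.attn(patch_tokens, exemplar_tokens, exemplar_tokens)
        return self.norm(patch_tokens + attended)

class DensityHead(nn.Module):
    """Maps fused patch tokens to a per-patch density; the sum is the count."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1))

    def forward(self, tokens):
        density = self.mlp(tokens).squeeze(-1)   # (B, N_patches)
        count = density.sum(dim=-1)              # predicted object count per image
        return density, count

if __name__ == "__main__":
    B, N, K, D = 2, 196, 3, 256
    patches = torch.randn(B, N, D)    # encoded image patches
    exemplars = torch.randn(B, K, D)  # encoded few-shot exemplars
    fused = ExemplarCrossAttention(D)(patches, exemplars)
    density, count = DensityHead(D)(fused)
    print(density.shape, count.shape)  # torch.Size([2, 196]) torch.Size([2])
```

In the zero-shot setting, the exemplar tokens could be replaced by learnable query tokens so the same interface still applies; that choice is an assumption of this sketch.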


Datasets

FSC147 · CARPK

Results from the Paper


Task             Dataset   Model    Metric Name   Metric Value   Global Rank
Object Counting  CARPK     CounTR   MAE           5.75           #3
Object Counting  CARPK     CounTR   RMSE          7.45           #3
Object Counting  FSC147    CounTR   MAE (val)     13.13          #5
Object Counting  FSC147    CounTR   RMSE (val)    49.83          #6
Object Counting  FSC147    CounTR   MAE (test)    11.95          #4
Object Counting  FSC147    CounTR   RMSE (test)   91.23          #6
