Revisiting Classification Perspective on Scene Text Recognition

22 Feb 2021  ·  Hongxiang Cai, Jun Sun, Yichao Xiong ·

The prevalent perspectives on scene text recognition are sequence-to-sequence (seq2seq) and segmentation. However, the former is composed of many components, which makes implementation and deployment complicated, while the latter requires character-level annotations, which are expensive. In this paper, we revisit the classification perspective, which models scene text recognition as an image classification problem. The classification perspective has a simple pipeline and needs only word-level annotations. We revive this perspective by devising a scene text recognition model named CSTR, which performs as well as methods from other perspectives. The CSTR model consists of CPNet (classification perspective network) and SPPN (separated conv with global average pooling prediction network). CSTR is as simple as an image classification model such as ResNet \cite{he2016deep}, which makes it easy to implement and deploy. We demonstrate the effectiveness of the classification perspective on scene text recognition with extensive experiments. Furthermore, CSTR achieves nearly state-of-the-art performance on six public benchmarks covering both regular and irregular text. The code will be available at https://github.com/Media-Smart/vedastr.
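The abstract describes SPPN as a "separated conv with global average pooling prediction network", i.e. a head that turns a backbone feature map into one classification per character position. The following is a hypothetical numpy sketch of that idea, not the authors' implementation: the per-position "separated conv" is modeled as a 1x1 convolution (a linear map over channels), and global average pooling collapses the spatial grid into a single class-logit vector per position. The function name, shapes, and toy sizes are all illustrative assumptions.

```python
import numpy as np

def sppn_predict(features, weights):
    """Illustrative sketch (not the paper's code) of a separated-conv +
    global-average-pooling prediction head: one 1x1 conv per character
    slot, then global average pooling over the spatial grid, yielding
    one class-logit vector per character position."""
    # features: (C, H, W) feature map from a backbone (e.g. ResNet-like)
    C, H, W = features.shape
    logits = []
    for w in weights:  # one (num_classes, C) weight matrix per character position
        mapped = np.einsum('kc,chw->khw', w, features)  # 1x1 conv for this slot
        logits.append(mapped.mean(axis=(1, 2)))         # global average pooling
    return np.stack(logits)  # (max_len, num_classes)

# toy example: 8-channel features on a 4x16 grid, 3 character slots, 5 classes
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 16))
ws = [rng.standard_normal((5, 8)) for _ in range(3)]
out = sppn_predict(feats, ws)
print(out.shape)  # (3, 5)
```

Decoding is then a per-row argmax (`out.argmax(axis=1)`), which is what makes the pipeline as simple as plain image classification: no recurrent decoder or attention alignment is needed at inference time.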

Task                    Dataset     Model  Metric    Value  Global Rank
Scene Text Recognition  ICDAR 2003  CSTR   Accuracy  94.8   # 6
Scene Text Recognition  ICDAR 2013  CSTR   Accuracy  93.2   # 25
Scene Text Recognition  ICDAR 2015  CSTR   Accuracy  81.6   # 15
Scene Text Recognition  SVT         CSTR   Accuracy  90.6   # 22
