Vision Grid Transformer for Document Layout Analysis

ICCV 2023 · Cheng Da, Chuwei Luo, Qi Zheng, Cong Yao

Document pre-trained models and grid-based models have proven to be very effective on various tasks in Document AI. However, for the document layout analysis (DLA) task, existing document pre-trained models, even those pre-trained in a multi-modal fashion, usually rely on either textual features or visual features. Grid-based models for DLA are multi-modal but largely neglect the effect of pre-training. To fully leverage multi-modal information and exploit pre-training techniques to learn better representations for DLA, in this paper we present VGT, a two-stream Vision Grid Transformer, in which a Grid Transformer (GiT) is proposed and pre-trained for 2D token-level and segment-level semantic understanding. Furthermore, a new dataset named D^4LA, which is so far the most diverse and detailed manually-annotated benchmark for document layout analysis, is curated and released. Experimental results show that the proposed VGT model achieves new state-of-the-art results on DLA tasks, e.g., PubLayNet (95.7% → 96.2%), DocBank (79.6% → 84.1%), and D^4LA (67.7% → 68.8%). The code and models, as well as the D^4LA dataset, will be made publicly available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery.
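For intuition only, below is a minimal PyTorch sketch of the two-stream idea the abstract describes: a vision stream over image patches alongside a grid stream over 2D token embeddings, fused for per-cell layout classification. All module names, dimensions, the fusion scheme, and the classification head are illustrative assumptions; this is not the actual VGT architecture, pre-training setup, or detection framework.

```python
import torch
import torch.nn as nn

class TwoStreamLayoutSketch(nn.Module):
    """Hypothetical two-stream model: a vision stream over image patches
    and a grid stream over a 2D map of OCR token ids. Hyperparameters
    are placeholders, not the paper's configuration."""

    def __init__(self, vocab_size=30522, dim=256, grid_size=32, patch=16):
        super().__init__()
        # Vision stream: patchify the page image, then encode with a Transformer.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.vis_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2)
        # Grid stream: embed token ids placed on a coarse 2D grid,
        # then encode the flattened grid cells with another Transformer.
        self.tok_embed = nn.Embedding(vocab_size, dim)
        self.grid_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2)
        self.grid_size = grid_size
        # Simple fusion + per-cell classifier standing in for a detection head.
        self.fuse = nn.Linear(2 * dim, dim)
        self.classifier = nn.Linear(dim, 5)  # e.g. text/title/list/table/figure

    def forward(self, image, token_grid):
        # image: (B, 3, H, W); token_grid: (B, grid_size, grid_size) token ids.
        v = self.patch_embed(image).flatten(2).transpose(1, 2)   # (B, Nv, dim)
        v = self.vis_encoder(v)
        g = self.tok_embed(token_grid).flatten(1, 2)             # (B, Ng, dim)
        g = self.grid_encoder(g)
        # Pool the vision stream and broadcast it onto every grid cell.
        v_pooled = v.mean(dim=1, keepdim=True).expand(-1, g.size(1), -1)
        fused = self.fuse(torch.cat([g, v_pooled], dim=-1))
        return self.classifier(fused)                            # (B, Ng, 5)

if __name__ == "__main__":
    model = TwoStreamLayoutSketch()
    img = torch.randn(2, 3, 512, 512)
    grid = torch.randint(0, 30522, (2, 32, 32))
    print(model(img, grid).shape)  # torch.Size([2, 1024, 5])
```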

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Document Layout Analysis | PubLayNet val | VGT | Text | 0.950 | #3 |
| Document Layout Analysis | PubLayNet val | VGT | Title | 0.939 | #1 |
| Document Layout Analysis | PubLayNet val | VGT | List | 0.968 | #2 |
| Document Layout Analysis | PubLayNet val | VGT | Table | 0.981 | #1 |
| Document Layout Analysis | PubLayNet val | VGT | Figure | 0.971 | #3 |
| Document Layout Analysis | PubLayNet val | VGT | Overall | 0.962 | #1 |
| Document Layout Analysis | PubLayNet val | ResNeXt-101-32×8d | Text | 0.930 | #10 |
| Document Layout Analysis | PubLayNet val | ResNeXt-101-32×8d | Title | 0.862 | #10 |
| Document Layout Analysis | PubLayNet val | ResNeXt-101-32×8d | List | 0.940 | #7 |
| Document Layout Analysis | PubLayNet val | ResNeXt-101-32×8d | Table | 0.976 | #6 |
| Document Layout Analysis | PubLayNet val | ResNeXt-101-32×8d | Figure | 0.968 | #5 |
| Document Layout Analysis | PubLayNet val | ResNeXt-101-32×8d | Overall | 0.935 | #8 |
