Transformer-based Approach for Document Understanding

We present an end-to-end transformer-based framework named TRDLU for the task of Document Layout Understanding (DLU). DLU is the fundamental task of automatically understanding document structures. Accurately detecting content boxes and classifying them into semantically meaningful classes across documents of various formats remains an open challenge. Recently, transformer-based detection networks have shown advantages over traditional convolution-based methods in object detection. In this paper, we cast DLU as a detection task and introduce TRDLU, which integrates a transformer-based vision backbone with a transformer encoder-decoder as its detection pipeline. TRDLU relies only on visual features, yet it outperforms multi-modal feature-based models. To the best of our knowledge, this is the first study to employ a fully transformer-based framework for DLU tasks. We evaluated TRDLU on three different DLU benchmark datasets, each with strong baselines, and it outperforms the current state-of-the-art methods on all of them.
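
The abstract describes TRDLU as a transformer vision backbone feeding a transformer encoder-decoder detection head that predicts content boxes and their semantic classes. The sketch below illustrates that general DETR-style design only; the module names, layer sizes, and the small CNN used as a stand-in backbone are assumptions for illustration, not the authors' implementation (positional encodings and the matching loss are also omitted for brevity).

```python
# Hypothetical DETR-style layout detector sketch; NOT the TRDLU code.
import torch
import torch.nn as nn

class LayoutDetector(nn.Module):
    def __init__(self, num_classes=5, d_model=256, num_queries=100):
        super().__init__()
        # Stand-in visual backbone; TRDLU uses a transformer-based backbone,
        # a tiny CNN is used here only to keep the sketch self-contained.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=4, padding=1),
        )
        # Transformer encoder-decoder operating on flattened feature tokens.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True,
        )
        # Learned object queries; each query yields one candidate content box.
        self.queries = nn.Embedding(num_queries, d_model)
        # Per-query heads: class logits (+1 for "no object") and box coords.
        self.class_head = nn.Linear(d_model, num_classes + 1)
        self.box_head = nn.Linear(d_model, 4)  # (cx, cy, w, h), normalized

    def forward(self, images):                      # images: (B, 3, H, W)
        feats = self.backbone(images)               # (B, C, H', W')
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H'*W', C)
        q = self.queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        hs = self.transformer(tokens, q)            # (B, num_queries, C)
        return self.class_head(hs), self.box_head(hs).sigmoid()

# Example: class logits and boxes for a batch of two page images.
model = LayoutDetector(num_classes=5)  # PubLayNet: text, title, list, table, figure
logits, boxes = model(torch.randn(2, 3, 512, 512))
```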


Datasets

PubLayNet

Task: Document Layout Analysis
Dataset: PubLayNet val
Model: TRDLU

Metric     Value    Global Rank
Text       0.958    #2
Title      0.921    #3
List       0.975    #1
Table      0.976    #6
Figure     0.966    #6
Overall    0.959    #2
