The core task of structure analysis is to parse and segment the content of input document images: different elements in the image (such as text, charts, and images) are identified, classified into predefined categories (e.g., plain text region, title region, table region, image region, list region), and localized, determining the position and size of each region in the document.
The inference time only includes the model inference time and does not include the time for pre- or post-processing.
The layout detection model includes 20 common categories: document title, paragraph title, text, page number, abstract, table of contents, references, footnotes, header, footer, algorithm, formula, formula number, image, table, seal, figure_table title, chart, sidebar text, and lists of references.
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Introduction |
|---|---|---|---|---|---|---|
| PP-DocLayout_plus-L | Inference Model/Training Model | 83.2 | 53.03 / 17.23 | 634.62 / 378.32 | 126.01 | A higher-precision layout area localization model trained on a self-built dataset containing Chinese and English papers, PPT, multi-layout magazines, contracts, books, exams, ancient books and research reports using RT-DETR-L |
The layout detection model includes 1 category: block.
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Introduction |
|---|---|---|---|---|---|---|
| PP-DocBlockLayout | Inference Model/Training Model | 95.9 | 34.60 / 28.54 | 506.43 / 256.83 | 123.92 | A layout block localization model trained on a self-built dataset containing Chinese and English papers, PPT, multi-layout magazines, contracts, books, exams, ancient books and research reports using RT-DETR-L |
The layout detection model includes 23 common categories: document title, paragraph title, text, page number, abstract, table of contents, references, footnotes, header, footer, algorithm, formula, formula number, image, figure title, table, table title, seal, chart title, chart, header image, footer image, and sidebar text
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Introduction |
|---|---|---|---|---|---|---|
| PP-DocLayout-L | Inference Model/Training Model | 90.4 | 33.59 / 33.59 | 503.01 / 251.08 | 123.76 | A high-precision layout area localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports using RT-DETR-L. |
| PP-DocLayout-M | Inference Model/Training Model | 75.2 | 13.03 / 4.72 | 43.39 / 24.44 | 22.578 | A layout area localization model with balanced precision and efficiency, trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports using PicoDet-L. |
| PP-DocLayout-S | Inference Model/Training Model | 70.9 | 11.54 / 3.86 | 18.53 / 6.29 | 4.834 | A high-efficiency layout area localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports using PicoDet-S. |
❗ The above list shows the 4 core models emphasized by the layout detection module. The module supports a total of 12 models, including several predefined models with different category sets. The complete model list is as follows:
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x_table | Inference Model/Training Model | 97.5 | 9.57 / 6.63 | 27.66 / 16.75 | 7.4 | A high-efficiency layout area localization model trained on a self-built dataset using PicoDet-1x, capable of detecting table regions. |
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model/Training Model | 88.2 | 8.43 / 3.44 | 17.60 / 6.51 | 4.8 | A high-efficiency layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-S. |
| PicoDet-L_layout_3cls | Inference Model/Training Model | 89.0 | 12.80 / 9.57 | 45.04 / 23.86 | 22.6 | A balanced efficiency and precision layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-L. |
| RT-DETR-H_layout_3cls | Inference Model/Training Model | 95.8 | 114.80 / 25.65 | 924.38 / 924.38 | 470.1 | A high-precision layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using RT-DETR-H. |
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x | Inference Model/Training Model | 97.8 | 9.62 / 6.75 | 26.96 / 12.77 | 7.4 | A high-efficiency English document layout area localization model trained on the PubLayNet dataset using PicoDet-1x. |
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model/Training Model | 87.4 | 8.80 / 3.62 | 17.51 / 6.35 | 4.8 | A high-efficiency layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-S. |
| PicoDet-L_layout_17cls | Inference Model/Training Model | 89.0 | 12.60 / 10.27 | 43.70 / 24.42 | 22.6 | A balanced efficiency and precision layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-L. |
| RT-DETR-H_layout_17cls | Inference Model/Training Model | 98.3 | 115.29 / 101.18 | 964.75 / 964.75 | 470.2 | A high-precision layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using RT-DETR-H. |
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 Precision / No TRT Acceleration | FP32 Precision / 8 Threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 Precision / 8 Threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the PaddleX Local Installation Tutorial.
After installing the wheel package, a few lines of code can complete the inference of the structure analysis module. You can switch models under this module freely, and you can also integrate the model inference of the structure analysis module into your project. Before running the following code, please download the demo image to your local machine.
```python
from paddlex import create_model

model_name = "PP-DocLayout_plus-L"
model = create_model(model_name=model_name)
output = model.predict("layout.jpg", batch_size=1, layout_nms=True)

for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
Note: By default, the official models are downloaded from HuggingFace. PaddleX also supports specifying the preferred download source via the environment variable PADDLE_PDX_MODEL_SOURCE; the supported values are huggingface, aistudio, bos, and modelscope. For example, to prioritize bos, set: PADDLE_PDX_MODEL_SOURCE="bos".
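For example, the preferred source can be set in the shell before starting Python (a minimal sketch using the environment variable described above):

```shell
# Prefer BOS as the model download source for subsequent PaddleX runs.
export PADDLE_PDX_MODEL_SOURCE="bos"
```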
The predicted bounding box coordinates are given in the format [xmin, ymin, xmax, ymax].
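As a quick illustration of this coordinate format, a small helper (not part of PaddleX) can compute a region's width and height:

```python
def box_size(box):
    # box is [xmin, ymin, xmax, ymax] in pixel coordinates.
    xmin, ymin, xmax, ymax = box
    return xmax - xmin, ymax - ymin

print(box_size([10, 20, 110, 70]))  # (100, 50)
```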
The visualized image is as follows:

Relevant methods, parameters, and explanations are as follows:
create_model instantiates an object detection model (here, PP-DocLayout_plus-L is used as an example). The detailed explanation is as follows:
| Parameter | Description | Type | Options | Default Value |
|---|---|---|---|---|
| model_name | Name of the model | str | None | None |
| model_dir | Path to store the model | str | None | None |
| device | The device used for model inference | str | Supports specifying a specific GPU card number, such as "gpu:0", other hardware card numbers, such as "npu:0", or the CPU, such as "cpu" | gpu:0 |
| img_size | Size of the input image; if not specified, the default PaddleX official model configuration will be used | int/list/None |  | None |
| threshold | Threshold for filtering low-confidence prediction results; if not specified, the default PaddleX official model configuration will be used | float/dict/None |  | None |
| layout_nms | Whether to use NMS post-processing to filter overlapping boxes; if not specified, the default PaddleX official model configuration will be used | bool/None |  | None |
| layout_unclip_ratio | Scaling factor for the side length of the detection box; if not specified, the default PaddleX official model configuration will be used | float/list/dict/None |  |  |
| layout_merge_bboxes_mode | Merging mode for the detection boxes output by the model; if not specified, the default PaddleX official model configuration will be used | string/dict/None |  | None |
| use_hpip | Whether to enable the high-performance inference plugin | bool | None | False |
| hpi_config | High-performance inference configuration | dict/None | None | None |
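To illustrate what enabling layout_nms does conceptually, here is a minimal sketch of IoU-based non-maximum suppression in plain Python; PaddleX's internal implementation may differ in details such as per-class handling and thresholds.

```python
def iou(a, b):
    # Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Repeatedly keep the highest-scoring box and drop boxes that
    # overlap it by more than iou_thresh.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [[0, 0, 100, 100], [10, 10, 110, 110], [200, 200, 300, 300]]
scores = [0.9, 0.8, 0.95]
print(nms(boxes, scores))  # [2, 0] -> the overlapping lower-score box is dropped
```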
The predict() method is called to perform inference; its parameters are as follows:

| Parameter | Description | Type | Options | Default Value |
|---|---|---|---|---|
| input | Data for prediction, supporting multiple input types | Python Var/str/list |  | None |
| batch_size | Batch size | int | Any integer greater than 0 | 1 |
| threshold | Threshold for filtering low-confidence prediction results | float/dict/None |  |  |
| layout_nms | Whether to use NMS post-processing to filter overlapping boxes; if not specified, the default PaddleX official model configuration will be used | bool/None |  | None |
| layout_unclip_ratio | Scaling factor for the side length of the detection box; if not specified, the default PaddleX official model configuration will be used | float/list/dict/None |  |  |
| layout_merge_bboxes_mode | Merging mode for the detection boxes output by the model; if not specified, the default PaddleX official model configuration will be used | string/dict/None |  | None |
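The side-length scaling described for layout_unclip_ratio can be sketched as expanding each box around its center; the exact PaddleX behavior (e.g., per-axis or per-class ratios) may differ.

```python
def unclip(box, ratio):
    # Scale the width and height of a [xmin, ymin, xmax, ymax] box
    # by `ratio`, keeping the box center fixed.
    xmin, ymin, xmax, ymax = box
    cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
    half_w = (xmax - xmin) * ratio / 2
    half_h = (ymax - ymin) * ratio / 2
    return [cx - half_w, cy - half_h, cx + half_w, cy + half_h]

print(unclip([100, 100, 200, 160], 1.1))  # [95.0, 97.0, 205.0, 163.0]
```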
| Attribute | Description |
|---|---|
| json | Get the prediction result in json format |
| img | Get the visualized image in dict format |
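A saved result file such as res.json can then be consumed like any JSON document. The snippet below is a hypothetical example: the keys shown ("boxes", "cls_id", "label", "score", "coordinate") are assumptions for illustration and may differ between PaddleX versions.

```python
import json

# Simulated content of a saved result file (structure is an assumption).
res_json = '''
{"boxes": [
  {"cls_id": 2, "label": "text", "score": 0.98, "coordinate": [36.5, 70.2, 540.8, 120.9]},
  {"cls_id": 8, "label": "table", "score": 0.95, "coordinate": [40.1, 150.0, 560.3, 400.7]}
]}
'''
res = json.loads(res_json)
labels = [box["label"] for box in res["boxes"]]
print(labels)  # ['text', 'table']
```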