Table recognition is a technology that automatically identifies and extracts table content and its structure from documents or images. It is widely used in data entry, information retrieval, and document analysis. By using computer vision and machine learning algorithms, table recognition can convert complex table information into an editable format, making it easier for users to further process and analyze data.
The General Table Recognition v2 Pipeline is designed to solve table recognition tasks by identifying tables in images and outputting them in HTML format. Unlike the General Table Recognition Pipeline, this pipeline introduces two additional modules: table classification and table cell detection, which are linked with the table structure recognition module to complete the table recognition task. This pipeline can achieve accurate table predictions and is applicable in various fields such as general, manufacturing, finance, and transportation. It also provides flexible service deployment options, supporting multiple programming languages on various hardware. Additionally, it offers secondary development capabilities, allowing you to train and fine-tune models on your own dataset, with seamless integration of the trained models.
❗ The General Table Recognition v2 Pipeline is still being optimized; its final version will be released in an upcoming version of PaddleX. To keep your usage stable, you can use the General Table Recognition Pipeline for table processing for now. We will post a notice when the final version of v2 is open-sourced, so please stay tuned!

The General Table Recognition v2 Pipeline includes mandatory modules such as table structure recognition, table classification, table cell detection, text detection, and text recognition, as well as optional modules such as layout region detection, document image orientation classification, and text image correction.
If you prioritize model accuracy, choose a model with higher accuracy; if you care more about inference speed, choose a model with faster inference speed; if you are concerned about model storage size, choose a model with a smaller storage size.
Table Recognition Module Models:
| Model | Model Download Link | Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| SLANeXt_wired | Inference Model/Training Model | 69.65 | -- | -- | 351M | The SLANeXt series is a new generation of table structure recognition models developed by the Baidu PaddlePaddle Vision Team. Compared to SLANet and SLANet_plus, SLANeXt focuses on recognizing table structures and has trained dedicated weights for both wired and wireless tables, significantly improving the recognition capabilities for all types of tables, especially for wired tables. |
| SLANeXt_wireless | Inference Model/Training Model | 63.69 | 522.536 | 1845.37 | 6.9 M | (Shares the SLANeXt series description above.) |
Table Classification Module Models:
| Model | Model Download Link | Top1 Acc (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) |
|---|---|---|---|---|---|
| PP-LCNet_x1_0_table_cls | Inference Model/Training Model | 94.2 | 2.35 / 0.47 | 4.03 / 1.35 | 6.6M |
Table Cell Detection Module Models:
| Model | Model Download Link | mAP (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| RT-DETR-L_wired_table_cell_det | Inference Model/Training Model | 82.7 | 35.00 / 10.45 | 495.51 / 495.51 | 124M | RT-DETR is the first real-time end-to-end object detection model. The Baidu PaddlePaddle Vision Team, based on RT-DETR-L as the base model, has completed pretraining on a self-built table cell detection dataset, achieving good performance for both wired and wireless table cell detection. |
| RT-DETR-L_wireless_table_cell_det | Inference Model/Training Model | | | | | (Shares the RT-DETR-L table cell detection description above.) |
Text Detection Module Models:
| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_det | Inference Model/Training Model | 82.69 | 83.34 / 80.91 | 442.58 / 442.58 | 109 | The server-side text detection model of PP-OCRv4, with higher accuracy, suitable for deployment on high-performance servers. |
| PP-OCRv4_mobile_det | Inference Model/Training Model | 77.79 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 | The mobile text detection model of PP-OCRv4, with higher efficiency, suitable for deployment on edge devices. |
Text Recognition Module Models:
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_rec_doc | Inference Model/Training Model | 81.53 | 6.65 / 2.38 | 32.92 / 32.92 | 74.7 M | PP-OCRv4_server_rec_doc is trained on a mixed dataset of more Chinese document data and PP-OCR training data based on PP-OCRv4_server_rec. It has added the ability to recognize some traditional Chinese characters, Japanese, and special characters, and can support the recognition of more than 15,000 characters. In addition to improving the text recognition capability related to documents, it also enhances the general text recognition capability. |
| PP-OCRv4_mobile_rec | Inference Model/Training Model | 78.74 | 4.82 / 1.20 | 16.74 / 4.64 | 10.6 M | The lightweight recognition model of PP-OCRv4 has high inference efficiency and can be deployed on various hardware devices, including edge devices. |
| PP-OCRv4_server_rec | Inference Model/Training Model | 80.61 | 6.58 / 2.43 | 33.17 / 33.17 | 71.2 M | The server-side model of PP-OCRv4 offers high inference accuracy and can be deployed on various types of servers. |
| en_PP-OCRv4_mobile_rec | Inference Model/Training Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 6.8 M | The ultra-lightweight English recognition model, trained based on the PP-OCRv4 recognition model, supports the recognition of English letters and numbers. |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_rec_doc | Inference Model/Training Model | 81.53 | 6.65 / 2.38 | 32.92 / 32.92 | 74.7 M | PP-OCRv4_server_rec_doc is trained on a mixed dataset of more Chinese document data and PP-OCR training data based on PP-OCRv4_server_rec. It has added the recognition capabilities for some traditional Chinese characters, Japanese, and special characters. The number of recognizable characters is over 15,000. In addition to the improvement in document-related text recognition, it also enhances the general text recognition capability. |
| PP-OCRv4_mobile_rec | Inference Model/Training Model | 78.74 | 4.82 / 1.20 | 16.74 / 4.64 | 10.6 M | The lightweight recognition model of PP-OCRv4 has high inference efficiency and can be deployed on various hardware devices, including edge devices. |
| PP-OCRv4_server_rec | Inference Model/Training Model | 80.61 | 6.58 / 2.43 | 33.17 / 33.17 | 71.2 M | The server-side model of PP-OCRv4 offers high inference accuracy and can be deployed on various types of servers. |
| PP-OCRv3_mobile_rec | Inference Model/Training Model | 72.96 | 5.87 / 1.19 | 9.07 / 4.28 | 9.2 M | PP-OCRv3’s lightweight recognition model is designed for high inference efficiency and can be deployed on a variety of hardware devices, including edge devices. |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| ch_SVTRv2_rec | Inference Model/Training Model | 68.81 | 8.08 / 2.74 | 50.17 / 42.50 | 73.9 M | SVTRv2 is a server text recognition model developed by the OpenOCR team of Fudan University's Visual and Learning Laboratory (FVL). It won the first prize in the PaddleOCR Algorithm Model Challenge - Task One: OCR End-to-End Recognition Task. The end-to-end recognition accuracy on the A list is 6% higher than that of PP-OCRv4. |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| ch_RepSVTR_rec | Inference Model/Training Model | 65.07 | 5.93 / 1.62 | 20.73 / 7.32 | 22.1 M | The RepSVTR text recognition model is a mobile text recognition model based on SVTRv2. It won the first prize in the PaddleOCR Algorithm Model Challenge - Task One: OCR End-to-End Recognition Task. The end-to-end recognition accuracy on the B list is 2.5% higher than that of PP-OCRv4, with the same inference speed. |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| en_PP-OCRv4_mobile_rec | Inference Model/Training Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 6.8 M | The ultra-lightweight English recognition model trained based on the PP-OCRv4 recognition model supports the recognition of English and numbers. |
| en_PP-OCRv3_mobile_rec | Inference Model/Training Model | 70.69 | 5.44 / 0.75 | 8.65 / 5.57 | 7.8 M | The ultra-lightweight English recognition model trained based on the PP-OCRv3 recognition model supports the recognition of English and numbers. |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| korean_PP-OCRv3_mobile_rec | Inference Model/Training Model | 60.21 | 5.40 / 0.97 | 9.11 / 4.05 | 8.6 M | The ultra-lightweight Korean recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Korean and numbers. |
| japan_PP-OCRv3_mobile_rec | Inference Model/Training Model | 45.69 | 5.70 / 1.02 | 8.48 / 4.07 | 8.8 M | The ultra-lightweight Japanese recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Japanese and numbers. |
| chinese_cht_PP-OCRv3_mobile_rec | Inference Model/Training Model | 82.06 | 5.90 / 1.28 | 9.28 / 4.34 | 9.7 M | The ultra-lightweight Traditional Chinese recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Traditional Chinese and numbers. |
| te_PP-OCRv3_mobile_rec | Inference Model/Training Model | 95.88 | 5.42 / 0.82 | 8.10 / 6.91 | 7.8 M | The ultra-lightweight Telugu recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Telugu and numbers. |
| ka_PP-OCRv3_mobile_rec | Inference Model/Training Model | 96.96 | 5.25 / 0.79 | 9.09 / 3.86 | 8.0 M | The ultra-lightweight Kannada recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Kannada and numbers. |
| ta_PP-OCRv3_mobile_rec | Inference Model/Training Model | 76.83 | 5.23 / 0.75 | 10.13 / 4.30 | 8.0 M | The ultra-lightweight Tamil recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Tamil and numbers. |
| latin_PP-OCRv3_mobile_rec | Inference Model/Training Model | 76.93 | 5.20 / 0.79 | 8.83 / 7.15 | 7.8 M | The ultra-lightweight Latin recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Latin script and numbers. |
| arabic_PP-OCRv3_mobile_rec | Inference Model/Training Model | 73.55 | 5.35 / 0.79 | 8.80 / 4.56 | 7.8 M | The ultra-lightweight Arabic script recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Arabic script and numbers. |
| cyrillic_PP-OCRv3_mobile_rec | Inference Model/Training Model | 94.28 | 5.23 / 0.76 | 8.89 / 3.88 | 7.9 M | The ultra-lightweight cyrillic alphabet recognition model trained based on the PP-OCRv3 recognition model supports the recognition of cyrillic letters and numbers. |
| devanagari_PP-OCRv3_mobile_rec | Inference Model/Training Model | 96.44 | 5.22 / 0.79 | 8.56 / 4.06 | 7.9 M | The ultra-lightweight Devanagari script recognition model trained based on the PP-OCRv3 recognition model supports the recognition of Devanagari script and numbers. |
Layout Region Detection Module Models (Optional):
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-DocLayout-L | Inference Model/Training Model | 90.4 | 34.6244 / 10.3945 | 510.57 / - | 123.76 M | A high-precision layout region localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, based on RT-DETR-L. |
| PP-DocLayout-M | Inference Model/Training Model | 75.2 | 13.3259 / 4.8685 | 44.0680 / 44.0680 | 22.578 | A balanced precision and efficiency layout region localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, based on PicoDet-L. |
| PP-DocLayout-S | Inference Model/Training Model | 70.9 | 8.3008 / 2.3794 | 10.0623 / 9.9296 | 4.834 | A high-efficiency layout region localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, based on PicoDet-S. |
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x_table | Inference Model/Training Model | 97.5 | 8.02 / 3.09 | 23.70 / 20.41 | 7.4 M | A high-efficiency layout region localization model trained on a self-built dataset using PicoDet-1x, capable of locating 1 type of region: tables |
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model/Training Model | 88.2 | 8.99 / 2.22 | 16.11 / 8.73 | 4.8 | A high-efficiency layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using the lightweight PicoDet-S model |
| PicoDet-L_layout_3cls | Inference Model/Training Model | 89.0 | 13.05 / 4.50 | 41.30 / 41.30 | 22.6 | A layout region localization model with balanced efficiency and accuracy, trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-L |
| RT-DETR-H_layout_3cls | Inference Model/Training Model | 95.8 | 114.93 / 27.71 | 947.56 / 947.56 | 470.1 | A high-precision layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using RT-DETR-H |
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x | Inference Model/Training Model | 97.8 | 9.03 / 3.10 | 25.82 / 20.70 | 7.4 | A high-efficiency English document layout region localization model trained on the PubLayNet dataset using PicoDet-1x |
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model/Training Model | 87.4 | 9.11 / 2.12 | 15.42 / 9.12 | 4.8 | A high-efficiency layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using the lightweight PicoDet-S model |
| PicoDet-L_layout_17cls | Inference Model/Training Model | 89.0 | 13.50 / 4.69 | 43.32 / 43.32 | 22.6 | A layout region localization model with balanced efficiency and accuracy, trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-L |
| RT-DETR-H_layout_17cls | Inference Model/Training Model | 98.3 | 115.29 / 104.09 | 995.27 / 995.27 | 470.2 | A high-precision layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using RT-DETR-H |
Text Image Correction Module Model (Optional):
| Model | Model Download Link | MS-SSIM (%) | Model Storage Size (M) | Description |
|---|---|---|---|---|
| UVDoc | Inference Model/Training Model | 54.40 | 30.3 M | High-precision text image correction model |
The accuracy metrics of the model are measured from the DocUNet benchmark.
Document Image Orientation Classification Module Model (Optional):
| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-LCNet_x1_0_doc_ori | Inference Model/Training Model | 99.06 | 2.31 / 0.43 | 3.37 / 1.27 | 7 | Document image classification model based on PP-LCNet_x1_0, containing four categories: 0 degrees, 90 degrees, 180 degrees, 270 degrees |
All model pipelines provided by PaddleX can be quickly experienced. You can use the command line or Python locally to experience the effect of the General Table Recognition v2 Pipeline.
Online experience is not supported at the moment.
Before using the General Table Recognition v2 Pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the PaddleX Local Installation Tutorial.
You can quickly experience the effect of the table recognition pipeline with a single command. Use the test file, and replace `--input` with a local path if you want to run prediction on your own image:
```bash
paddlex --pipeline table_recognition_v2 \
    --input table_recognition.jpg \
    --save_path ./output \
    --device gpu:0
```
For descriptions of the relevant parameters, refer to the parameter descriptions in 2.2 Integration via Python Script.
After running, the prediction result is printed to the terminal (truncated here). It contains the detected table cell boxes and the recognized table structure as HTML (the `pred_html` field), and the visualized and structured results are saved under `./output`.
The command line above is intended for a quick look at the results. In a project, you will generally need to integrate the pipeline through code; you can complete fast inference for the pipeline with just a few lines, as shown below:
```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="table_recognition_v2")

output = pipeline.predict(
    input="table_recognition.jpg",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)

for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_xlsx("./output/")
    res.save_to_html("./output/")
    res.save_to_json("./output/")
```
In the above Python script, the following steps are executed:
(1) The create_pipeline() function is used to instantiate a General Table Recognition v2 Pipeline object. The specific parameter descriptions are as follows:
| Parameter | Description | Type | Default Value |
|---|---|---|---|
| `pipeline` | The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. | `str` | `None` |
| `config` | Specific configuration information for the pipeline (if set together with `pipeline`, it takes precedence over `pipeline`, and the pipeline name must match `pipeline`). | `dict[str, Any]` | `None` |
| `device` | The device used for pipeline inference. Supports specifying a specific GPU card number, such as "gpu:0", a specific card number for other hardware, such as "npu:0", or "cpu" for CPU. | `str` | `gpu:0` |
| `use_hpip` | Whether to enable high-performance inference. Only available if the pipeline supports high-performance inference. | `bool` | `False` |
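For instance, a minimal sketch of instantiating the pipeline with these initialization parameters passed explicitly (the device string shown is illustrative):

```python
from paddlex import create_pipeline

# Instantiate the General Table Recognition v2 Pipeline on GPU card 0;
# use_hpip only takes effect if the high-performance inference plugin is installed.
pipeline = create_pipeline(
    pipeline="table_recognition_v2",
    device="gpu:0",
    use_hpip=False,
)
```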
(2) Call the predict() method of the General Table Recognition v2 Pipeline object for inference prediction. This method will return a generator. The parameters of the predict() method and their descriptions are as follows:
| Parameter | Description | Type | Options | Default Value |
|---|---|---|---|---|
| `input` | Data to be predicted; supports multiple input types. Required. | `Python Var` \| `str` \| `list` | | `None` |
| `device` | Pipeline inference device | `str` \| `None` | | `None` |
| `use_doc_orientation_classify` | Whether to use the document orientation classification module | `bool` \| `None` | | `None` |
| `use_doc_unwarping` | Whether to use the document unwarping module | `bool` \| `None` | | `None` |
| `text_det_limit_side_len` | Image side-length limit for text detection | `int` \| `None` | | `None` |
| `text_det_limit_type` | Type of the image side-length limit for text detection | `str` \| `None` | | `None` |
| `text_det_thresh` | Detection pixel threshold; only pixels with scores greater than this threshold in the output probability map are considered text pixels | `float` \| `None` | | `None` |
| `text_det_box_thresh` | Detection box threshold; a result is considered a text region if the average score of all pixels within the detection box is greater than this threshold | `float` \| `None` | | `None` |
| `text_det_unclip_ratio` | Text detection expansion ratio, used to expand the text region; the larger the value, the larger the expanded area | `float` \| `None` | | `None` |
| `text_rec_score_thresh` | Text recognition threshold; text results with scores greater than this threshold are retained | `float` \| `None` | | `None` |
| `use_layout_detection` | Whether to use the layout detection module | `bool` \| `None` | | `None` |
| `layout_threshold` | Confidence threshold for layout detection; only boxes with scores above this threshold are output | `float` \| `dict` \| `None` | | `None` |
| `layout_nms` | Whether to use NMS post-processing for layout detection | `bool` \| `None` | | `None` |
| `layout_unclip_ratio` | Scale factor for the side length of detection boxes; if not specified, the default PaddleX official model configuration is used | `float` \| `list` \| `None` | | `None` |
| `layout_merge_bboxes_mode` | Merging mode for the detection boxes output by the model; if not specified, the default PaddleX official model configuration is used | `string` \| `None` | | `None` |
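As a sketch of how these optional arguments can be combined in a single call (the threshold values below are illustrative, not recommended settings):

```python
output = pipeline.predict(
    input="table_recognition.jpg",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_layout_detection=True,
    text_det_limit_side_len=960,   # illustrative value
    text_det_thresh=0.3,           # illustrative value
    text_rec_score_thresh=0.0,     # keep all recognized text
)
for res in output:
    res.print()
```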
(3) Process the prediction results. The prediction result for each sample is of type dict, and supports operations such as printing, saving as an image, saving as an xlsx file, saving as an HTML file, and saving as a json file:
| Method | Description | Parameter | Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the result to the terminal | `format_json` | `bool` | Whether to format the output content using `JSON` indentation | `True` |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable. Only effective when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether non-ASCII characters are escaped to Unicode. When set to `True`, all non-ASCII characters will be escaped; `False` retains the original characters. Only effective when `format_json` is `True` | `False` |
| `save_to_json()` | Save the result as a JSON file | `save_path` | `str` | The file path for saving. When it is a directory, the saved file name will match the input file name | N/A |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable. Only effective when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether non-ASCII characters are escaped to Unicode. When set to `True`, all non-ASCII characters will be escaped; `False` retains the original characters. Only effective when `format_json` is `True` | `False` |
| `save_to_img()` | Save the result as an image file | `save_path` | `str` | The file path for saving, supporting both directory and file paths | N/A |
| `save_to_xlsx()` | Save the result as an xlsx file | `save_path` | `str` | The file path for saving, supporting both directory and file paths | N/A |
| `save_to_html()` | Save the result as an HTML file | `save_path` | `str` | The file path for saving, supporting both directory and file paths | N/A |
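For example, a brief sketch combining these result-handling methods (it assumes `output` from the earlier `predict()` call, and the output directory is illustrative):

```python
for res in output:
    # Pretty-print without escaping non-ASCII characters
    res.print(format_json=True, indent=4, ensure_ascii=False)
    # Passing a directory keeps the input file's base name for each saved file
    res.save_to_json("./output/")
    res.save_to_img("./output/")
    res.save_to_html("./output/")
    res.save_to_xlsx("./output/")
```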
Calling the print() method will print the result to the terminal, with the printed content explained as follows:
- `input_path`: `(str)` The input path of the image to be predicted
- `model_settings`: `(Dict[str, bool])` Configuration parameters required for the pipeline
  - `use_doc_preprocessor`: `(bool)` Controls whether to enable the document preprocessing sub-pipeline
  - `use_layout_detection`: `(bool)` Controls whether to enable the layout detection sub-pipeline
  - `use_ocr_model`: `(bool)` Controls whether to enable the OCR sub-pipeline
- `layout_det_res`: `(Dict[str, Union[List[numpy.ndarray], List[float]]])` Output of the layout detection sub-module. Only exists when `use_layout_detection=True`
  - `input_path`: `(Union[str, None])` The image path accepted by the layout detection module; saved as `None` when the input is a `numpy.ndarray`
  - `page_index`: `(Union[int, None])` If the input is a PDF file, indicates the current page number of the PDF; otherwise `None`
  - `boxes`: `(List[Dict])` List of layout detection boxes, where each element contains the following fields
    - `cls_id`: `(int)` The class ID of the detection box
    - `score`: `(float)` The confidence score of the detection box
    - `coordinate`: `(List[float])` The coordinates of the four corners of the detection box, in the order x1, y1, x2, y2, i.e. the x- and y-coordinates of the top-left corner followed by those of the bottom-right corner
- `doc_preprocessor_res`: `(Dict[str, Union[str, Dict[str, bool], int]])` Output of the document preprocessing sub-pipeline. Only exists when `use_doc_preprocessor=True`
  - `input_path`: `(Union[str, None])` The image path accepted by the preprocessing sub-pipeline; saved as `None` when the input is a `numpy.ndarray`
  - `model_settings`: `(Dict)` Model configuration parameters for the preprocessing sub-pipeline
    - `use_doc_orientation_classify`: `(bool)` Controls whether to enable document orientation classification
    - `use_doc_unwarping`: `(bool)` Controls whether to enable document unwarping
  - `angle`: `(int)` The prediction result of document orientation classification. When enabled, the value is one of [0, 1, 2, 3], corresponding to [0°, 90°, 180°, 270°]; when not enabled, it is -1
- `dt_polys`: `(List[numpy.ndarray])` List of text detection polygons. Each detection box is represented by a numpy array of 4 vertex coordinates, with shape (4, 2) and dtype int16
- `dt_scores`: `(List[float])` List of confidence scores for the text detection boxes
- `text_det_params`: `(Dict[str, Dict[str, int, float]])` Configuration parameters for the text detection module
  - `limit_side_len`: `(int)` The side-length limit applied during image preprocessing
  - `limit_type`: `(str)` The handling method for the side-length limit
  - `thresh`: `(float)` Confidence threshold for text pixel classification
  - `box_thresh`: `(float)` Confidence threshold for text detection boxes
  - `unclip_ratio`: `(float)` Expansion coefficient for text detection boxes
  - `text_type`: `(str)` Type of text detection, currently fixed as "general"
- `text_rec_score_thresh`: `(float)` Filtering threshold for text recognition results
- `rec_texts`: `(List[str])` List of text recognition results; only texts with confidence scores above `text_rec_score_thresh` are included
- `rec_scores`: `(List[float])` List of text recognition confidence scores, filtered by `text_rec_score_thresh`
- `rec_polys`: `(List[numpy.ndarray])` List of text detection boxes after confidence filtering, in the same format as `dt_polys`
- `rec_boxes`: `(numpy.ndarray)` Array of rectangular bounding boxes with shape (n, 4) and dtype int16, where each row represents the [x_min, y_min, x_max, y_max] coordinates of a rectangle; (x_min, y_min) is the top-left corner and (x_max, y_max) is the bottom-right corner
Calling the save_to_json() method will save the above content to the specified save_path. If specified as a directory, the saved path will be save_path/{your_img_basename}.json; if specified as a file, it will be saved directly to that file. Since JSON files do not support saving numpy arrays, the numpy.array types will be converted to lists.
Calling the save_to_img() method will save the visualization results to the specified save_path. If specified as a directory, the saved path will be save_path/{your_img_basename}_ocr_res_img.{your_img_extension}; if specified as a file, it will be saved directly to that file. (Since the pipeline usually produces multiple result images, it is not recommended to specify a specific file path directly; otherwise, multiple images would be overwritten and only the last one would remain.)
Calling the save_to_html() method will save the above content to the specified save_path. If specified as a directory, the saved path will be save_path/{your_img_basename}.html; if specified as a file, it will be saved directly to that file. In the General Table Recognition v2 Pipeline, the HTML form of the table in the image will be written to the specified HTML file.
Calling the save_to_xlsx() method will save the above content to the specified save_path. If specified as a directory, the saved path will be save_path/{your_img_basename}.xlsx; if specified as a file, it will be saved directly to that file. In the General Table Recognition v2 Pipeline, the Excel form of the table in the image will be written to the specified XLSX file.
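If you need to post-process the saved results, the JSON file can be read back with the standard library. The sketch below assumes `save_to_json("./output/")` was called for the input `table_recognition.jpg`, so the file name follows the `{your_img_basename}.json` rule described above; the exact fields inside depend on the pipeline configuration:

```python
import json

with open("./output/table_recognition.json", "r", encoding="utf-8") as f:
    result = json.load(f)

# Inspect which fields were saved (e.g. model settings, table results, OCR results)
print(list(result.keys()))
```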
In addition, visualized images and prediction results can be obtained through the following attributes:
| Attribute | Attribute Description |
|---|---|
| `json` | Gets the prediction result in `json` format |
| `img` | Gets the visualized images in `dict` format |
The `json` attribute is data of type `dict`, with content consistent with what is saved by calling the `save_to_json()` method.

The `img` attribute is data of type `dict`. The keys are `table_res_img`, `ocr_res_img`, `layout_res_img`, and `preprocessed_img`, and the corresponding values are four `Image.Image` objects: the visualized table recognition result, the visualized OCR result, the visualized layout region detection result, and the visualized image preprocessing result, in that order. If a sub-module is not used, the corresponding image is not included in the dictionary.
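As a short sketch of working with these attributes (the output file names below are illustrative):

```python
for res in output:
    data = res.json    # dict, same content as what save_to_json() writes
    images = res.img   # dict of PIL Image.Image objects keyed by result type
    # Keys such as table_res_img are absent if the corresponding sub-module was not used
    for name, image in images.items():
        image.save(f"./output/{name}.jpg")
```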
In addition, you can obtain the General Table Recognition v2 Pipeline configuration file and load it for prediction. You can execute the following command to save the configuration file to `my_path`:

```bash
paddlex --get_pipeline_config table_recognition_v2 --save_path ./my_path
```
If you have obtained the configuration file, you can customize the settings for the General Table Recognition v2 Pipeline. Simply modify the pipeline parameter value in the create_pipeline method to the path of the pipeline configuration file. The example is as follows:
```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/table_recognition_v2.yaml")

output = pipeline.predict(
    input="table_recognition.jpg",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)

for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_xlsx("./output/")
    res.save_to_html("./output/")
    res.save_to_json("./output/")
```
Note: The parameters in the configuration file are the initialization parameters for the pipeline. If you want to change the initialization parameters of the General Table Recognition v2 Pipeline, you can directly modify the parameters in the configuration file and load the configuration file for prediction. Additionally, CLI prediction also supports passing in the configuration file by specifying the path with --pipeline.
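For example, a CLI invocation that loads the saved configuration file via `--pipeline` might look like this (paths follow the earlier examples):

```bash
paddlex --pipeline ./my_path/table_recognition_v2.yaml \
    --input table_recognition.jpg \
    --save_path ./output \
    --device gpu:0
```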
If the pipeline meets your requirements for inference speed and accuracy, you can proceed with development integration/deployment.
If you need to directly apply the pipeline in your Python project, you can refer to the example code in 2.2 Integration via Python Script.
In addition, PaddleX also provides three other deployment methods, detailed as follows:
🚀 High-Performance Inference: In actual production environments, many applications have stringent performance requirements (especially response speed) for deployment strategies to ensure efficient system operation and smooth user experience. Therefore, PaddleX provides a high-performance inference plugin designed to deeply optimize the performance of model inference and pre/post-processing, significantly speeding up the end-to-end process. For detailed high-performance inference procedures, please refer to the PaddleX High-Performance Inference Guide.
☁️ Service Deployment: Service deployment is a common form of deployment in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. PaddleX supports multiple pipeline service deployment solutions. For detailed pipeline service deployment procedures, please refer to the PaddleX Service Deployment Guide.
Below are the API references and multi-language service call examples for basic service deployment:
API Reference

For the main operations provided by the service:

- When the request is processed successfully, the response status code is `200`, and the properties of the response body are as follows:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Fixed as `0`. |
| `errorMsg` | `string` | Error message. Fixed as `"Success"`. |
| `result` | `object` | The result of the operation. |

- When the request is not processed successfully, the properties of the response body are as follows:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Same as the response status code. |
| `errorMsg` | `string` | Error message. |

The main operations provided by the service are as follows:

infer

Locate and recognize tables in the image.

`POST /table-recognition`

The properties of the request body are as follows:

| Name | Type | Meaning | Required |
|---|---|---|---|
| `file` | `string` | The URL of an image or PDF file accessible by the server, or the Base64-encoded content of such a file. For PDF files with more than 10 pages, only the content of the first 10 pages will be used. | Yes |
| `fileType` | `integer` \| `null` | The type of the file. `0` indicates a PDF file, and `1` indicates an image file. If this attribute is not present in the request body, the file type will be inferred from the URL. | No |
| `useDocOrientationClassify` | `boolean` \| `null` | See the `use_doc_orientation_classify` parameter description in the pipeline's `predict` method. | No |
| `useDocUnwarping` | `boolean` \| `null` | See the `use_doc_unwarping` parameter description in the pipeline's `predict` method. | No |
| `useLayoutDetection` | `boolean` \| `null` | See the `use_layout_detection` parameter description in the pipeline's `predict` method. | No |
| `useOcrModel` | `boolean` \| `null` | See the `use_ocr_model` parameter description in the pipeline's `predict` method. | No |
| `layoutThreshold` | `number` \| `null` | See the `layout_threshold` parameter description in the pipeline's `predict` method. | No |
| `layoutNms` | `boolean` \| `null` | See the `layout_nms` parameter description in the pipeline's `predict` method. | No |
| `layoutUnclipRatio` | `number` \| `array` \| `null` | See the `layout_unclip_ratio` parameter description in the pipeline's `predict` method. | No |
| `layoutMergeBboxesMode` | `string` \| `null` | See the `layout_merge_bboxes_mode` parameter description in the pipeline's `predict` method. | No |
| `textDetLimitSideLen` | `integer` \| `null` | See the `text_det_limit_side_len` parameter description in the pipeline's `predict` method. | No |
| `textDetLimitType` | `string` \| `null` | See the `text_det_limit_type` parameter description in the pipeline's `predict` method. | No |
| `textDetThresh` | `number` \| `null` | See the `text_det_thresh` parameter description in the pipeline's `predict` method. | No |
| `textDetBoxThresh` | `number` \| `null` | See the `text_det_box_thresh` parameter description in the pipeline's `predict` method. | No |
| `textDetUnclipRatio` | `number` \| `null` | See the `text_det_unclip_ratio` parameter description in the pipeline's `predict` method. | No |
| `textRecScoreThresh` | `number` \| `null` | See the `text_rec_score_thresh` parameter description in the pipeline's `predict` method. | No |

When the request is processed successfully, the `result` of the response body contains `tableRecResults`. Each element in `tableRecResults` is an object with the following properties:

| Name | Type | Description |
|---|---|---|
| `prunedResult` | `object` | A simplified version of the `res` field in the JSON representation generated by the `predict` method of the pipeline object, with the `input_path` field removed. |
| `outputImages` | `object` \| `null` | See the description of the `img` attribute in the pipeline prediction result. Images are in JPEG format and encoded with Base64. |
| `inputImage` | `string` \| `null` | The input image, in JPEG format and encoded with Base64. |
Multi-language service call example
Python
```python
import base64

import requests

API_URL = "http://localhost:8080/table-recognition"
file_path = "./demo.jpg"

# Encode the local file as Base64
with open(file_path, "rb") as file:
    file_bytes = file.read()
    file_data = base64.b64encode(file_bytes).decode("ascii")

payload = {"file": file_data, "fileType": 1}

# Call the service
response = requests.post(API_URL, json=payload)

# Process the returned results
assert response.status_code == 200
result = response.json()["result"]
for i, res in enumerate(result["tableRecResults"]):
    print("Detected tables:")
    print(res["tables"])
    layout_img_path = f"layout_{i}.jpg"
    with open(layout_img_path, "wb") as f:
        f.write(base64.b64decode(res["layoutImage"]))
    ocr_img_path = f"ocr_{i}.jpg"
    with open(ocr_img_path, "wb") as f:
        f.write(base64.b64decode(res["ocrImage"]))
    print(f"Output images saved at {layout_img_path} and {ocr_img_path}")
```
📱 Edge Deployment: Edge deployment is a method of placing computing and data processing capabilities directly on user devices, allowing them to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the PaddleX Edge Deployment Guide. You can choose the appropriate deployment method based on your needs to integrate the model pipeline into subsequent AI applications.
If the default model weights provided by the General Table Recognition v2 Pipeline do not meet your requirements in terms of accuracy or speed, you can try to further fine-tune the existing models using your own domain-specific or application data to improve the recognition performance of the General Table Recognition v2 Pipeline in your specific scenario.
Since the General Table Recognition v2 Pipeline consists of several modules, if the overall performance is not satisfactory, the issue may lie in any one of these modules. You can analyze the images with poor recognition results to identify which module is problematic and refer to the corresponding fine-tuning tutorial links in the table below.
| Scenario | Fine-Tuning Module | Fine-Tuning Reference Link |
|---|---|---|
| Table classification errors | Table Classification Module | Link |
| Table cell localization errors | Table Cell Detection Module | Link |
| Table structure recognition errors | Table Structure Recognition Module | Link |
| Failure to detect table regions | Layout Region Detection Module | Link |
| Missing text detection | Text Detection Module | Link |
| Inaccurate text content | Text Recognition Module | Link |
| Inaccurate whole-image rotation correction | Document Image Orientation Classification Module | Link |
| Inaccurate image distortion correction | Text Image Correction Module | Fine-tuning not supported |
After fine-tuning with your private dataset, you can obtain the local model weight file.
To use the fine-tuned model weights, simply modify the pipeline configuration file by replacing the local path of the fine-tuned model weights in the corresponding position in the configuration file:
```yaml
SubModules:
  LayoutDetection:
    module_name: layout_detection
    model_name: PicoDet_layout_1x_table
    model_dir: null # Replace with the path to the fine-tuned layout region detection model weights

  TableClassification:
    module_name: table_classification
    model_name: PP-LCNet_x1_0_table_cls
    model_dir: null # Replace with the path to the fine-tuned table classification model weights

  WiredTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANeXt_wired
    model_dir: null # Replace with the path to the fine-tuned wired table structure recognition model weights

  WirelessTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANeXt_wireless
    model_dir: null # Replace with the path to the fine-tuned wireless table structure recognition model weights

  WiredTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wired_table_cell_det
    model_dir: null # Replace with the path to the fine-tuned wired table cell detection model weights

  WirelessTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wireless_table_cell_det
    model_dir: null # Replace with the path to the fine-tuned wireless table cell detection model weights

SubPipelines:
  DocPreprocessor:
    pipeline_name: doc_preprocessor
    use_doc_orientation_classify: True
    use_doc_unwarping: True
    SubModules:
      DocOrientationClassify:
        module_name: doc_text_orientation
        model_name: PP-LCNet_x1_0_doc_ori
        model_dir: null # Replace with the path to the fine-tuned document image orientation classification model weights

      DocUnwarping:
        module_name: image_unwarping
        model_name: UVDoc
        model_dir: null

  GeneralOCR:
    pipeline_name: OCR
    text_type: general
    use_doc_preprocessor: False
    use_textline_orientation: False
    SubModules:
      TextDetection:
        module_name: text_detection
        model_name: PP-OCRv4_server_det
        model_dir: null # Replace with the path to the fine-tuned text detection model weights
        limit_side_len: 960
        limit_type: max
        thresh: 0.3
        box_thresh: 0.6
        unclip_ratio: 2.0

      TextRecognition:
        module_name: text_recognition
        model_name: PP-OCRv4_server_rec
        model_dir: null # Replace with the path to the fine-tuned text recognition model weights
        batch_size: 1
        score_thresh: 0
```
Subsequently, refer to the command line method or Python script method in 2.2 Local Experience to load the modified pipeline configuration file.
PaddleX supports various mainstream hardware devices such as NVIDIA GPU, Kunlun Chip XPU, Ascend NPU, and Cambricon MLU. Simply modify the --device parameter to achieve seamless switching between different hardware.
For example, to run inference with the General Table Recognition v2 Pipeline on an Ascend NPU, the command is:

```bash
paddlex --pipeline table_recognition_v2 \
    --input table_recognition.jpg \
    --save_path ./output \
    --device npu:0
```
If you want to use the General Table Recognition v2 Pipeline on a wider variety of hardware, please refer to the PaddleX Multi-Hardware Usage Guide.