---
comments: true
---

# General Table Recognition v2 Pipeline Tutorial

## 1. Introduction to the General Table Recognition v2 Pipeline

Table recognition is a technology that automatically identifies and extracts table content and structure from documents or images. It is widely used in data entry, information retrieval, and document analysis. Using computer vision and machine learning algorithms, table recognition converts complex table information into an editable format, making it easier for users to further process and analyze the data.

The General Table Recognition v2 Pipeline (PP-TableMagic) is designed for table recognition tasks: it identifies tables in images and outputs them in HTML format. Unlike the General Table Recognition Pipeline, this pipeline introduces two additional modules, table classification and table cell detection, which are linked with the table structure recognition module to complete the task. The pipeline delivers accurate table predictions and is applicable in fields such as general documents, manufacturing, finance, and transportation. It also provides flexible service deployment options, supporting multiple programming languages on various hardware. Additionally, it offers custom development capabilities: you can train and fine-tune models on your own dataset and seamlessly integrate the trained models.

The General Table Recognition v2 Pipeline also supports end-to-end table structure recognition models (e.g., SLANet, SLANet_plus) and allows table recognition for wired and wireless tables to be configured independently, so developers can freely select and combine the best table recognition solutions.

The pipeline includes the mandatory modules table structure recognition, table classification, table cell localization, text detection, and text recognition, as well as the optional modules layout region detection, document image orientation classification, and text image correction.

If you prioritize model accuracy, choose a model with higher accuracy; if you care more about inference speed, choose a faster model; if you are concerned about storage size, choose a model with a smaller footprint.
👉Model List Details

Table Structure Recognition Module Models:

| Model | Model Download Link | Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| SLANeXt_wired | Inference Model / Training Model | 69.65 | -- | -- | 351M | The SLANeXt series are the latest table structure recognition models developed by the PaddlePaddle Vision Team. Compared to SLANet and SLANet_plus, SLANeXt focuses on table structure recognition and has dedicated weights trained for wired and wireless tables, significantly improving recognition for both types, especially for wired tables. |
| SLANeXt_wireless | Inference Model / Training Model | | | | | |

Table Classification Module Models:

| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) |
|---|---|---|---|---|---|
| PP-LCNet_x1_0_table_cls | Inference Model / Training Model | 94.2 | 2.35 / 0.47 | 4.03 / 1.35 | 6.6M |

Table Cell Detection Module Models:

| Model | Model Download Link | mAP (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| RT-DETR-L_wired_table_cell_det | Inference Model / Training Model | 82.7 | 35.00 / 10.45 | 495.51 / 495.51 | 124M | RT-DETR is the first real-time end-to-end object detection model. The PaddlePaddle Vision Team used RT-DETR-L as the base model and pre-trained it on a self-built table cell detection dataset, achieving good performance for both wired and wireless table cell detection. |
| RT-DETR-L_wireless_table_cell_det | Inference Model / Training Model | | | | | |

Text Detection Module Models:

| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_det | Inference Model / Training Model | 82.69 | 83.34 / 80.91 | 442.58 / 442.58 | 109 | The server-side text detection model of PP-OCRv4, with higher precision, suitable for deployment on high-performance servers. |
| PP-OCRv4_mobile_det | Inference Model / Training Model | 77.79 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 | The mobile text detection model of PP-OCRv4, with higher efficiency, suitable for deployment on edge devices. |

Text Recognition Module Models:

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_rec_doc | Inference Model / Training Model | 81.53 | 6.65 / 2.38 | 32.92 / 32.92 | 74.7 M | PP-OCRv4_server_rec_doc is trained on a mixed dataset of additional Chinese document data and PP-OCR training data based on PP-OCRv4_server_rec. It adds the ability to recognize some traditional Chinese characters, Japanese, and special characters, and supports more than 15,000 characters. In addition to improving document-related text recognition, it also enhances general text recognition capability. |
| PP-OCRv4_mobile_rec | Inference Model / Training Model | 78.74 | 4.82 / 1.20 | 16.74 / 4.64 | 10.6 M | The lightweight recognition model of PP-OCRv4, with high inference efficiency, deployable on various hardware devices, including edge devices. |
| PP-OCRv4_server_rec | Inference Model / Training Model | 80.61 | 6.58 / 2.43 | 33.17 / 33.17 | 71.2 M | The server-side model of PP-OCRv4, offering high inference accuracy and deployable on various types of servers. |
| en_PP-OCRv4_mobile_rec | Inference Model / Training Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 6.8 M | The ultra-lightweight English recognition model, trained based on the PP-OCRv4 recognition model, supporting the recognition of English letters and numbers. |
> ❗ The above list features the 4 core models that the text recognition module primarily supports. In total, this module supports 18 models. The complete list of models is as follows:
👉Model List Details

* Chinese Recognition Model
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_rec_doc | Inference Model / Training Model | 81.53 | 6.65 / 2.38 | 32.92 / 32.92 | 74.7 M | PP-OCRv4_server_rec_doc is trained on a mixed dataset of additional Chinese document data and PP-OCR training data based on PP-OCRv4_server_rec. It adds recognition capabilities for some traditional Chinese characters, Japanese, and special characters, and supports more than 15,000 characters. In addition to improving document-related text recognition, it also enhances general text recognition capability. |
| PP-OCRv4_mobile_rec | Inference Model / Training Model | 78.74 | 4.82 / 1.20 | 16.74 / 4.64 | 10.6 M | The lightweight recognition model of PP-OCRv4, with high inference efficiency, deployable on various hardware devices, including edge devices. |
| PP-OCRv4_server_rec | Inference Model / Training Model | 80.61 | 6.58 / 2.43 | 33.17 / 33.17 | 71.2 M | The server-side model of PP-OCRv4, offering high inference accuracy and deployable on various types of servers. |
| PP-OCRv3_mobile_rec | Inference Model / Training Model | 72.96 | 5.87 / 1.19 | 9.07 / 4.28 | 9.2 M | The lightweight recognition model of PP-OCRv3, designed for high inference efficiency and deployable on a variety of hardware devices, including edge devices. |
| ch_SVTRv2_rec | Inference Model / Training Model | 68.81 | 8.08 / 2.74 | 50.17 / 42.50 | 73.9 M | SVTRv2 is a server-side text recognition model developed by the OpenOCR team of Fudan University's Vision and Learning Laboratory (FVL). It won first prize in the PaddleOCR Algorithm Model Challenge - Task One: OCR End-to-End Recognition, with end-to-end recognition accuracy on the A list 6% higher than PP-OCRv4. |
| ch_RepSVTR_rec | Inference Model / Training Model | 65.07 | 5.93 / 1.62 | 20.73 / 7.32 | 22.1 M | RepSVTR is a mobile text recognition model based on SVTRv2. It won first prize in the PaddleOCR Algorithm Model Challenge - Task One: OCR End-to-End Recognition, with end-to-end recognition accuracy on the B list 2.5% higher than PP-OCRv4 at the same inference speed. |
* English Recognition Model
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| en_PP-OCRv4_mobile_rec | Inference Model / Training Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 6.8 M | The ultra-lightweight English recognition model trained based on the PP-OCRv4 recognition model, supporting the recognition of English letters and numbers. |
| en_PP-OCRv3_mobile_rec | Inference Model / Training Model | 70.69 | 5.44 / 0.75 | 8.65 / 5.57 | 7.8 M | The ultra-lightweight English recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of English letters and numbers. |
* Multilingual Recognition Model
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| korean_PP-OCRv3_mobile_rec | Inference Model / Training Model | 60.21 | 5.40 / 0.97 | 9.11 / 4.05 | 8.6 M | The ultra-lightweight Korean recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Korean and numbers. |
| japan_PP-OCRv3_mobile_rec | Inference Model / Training Model | 45.69 | 5.70 / 1.02 | 8.48 / 4.07 | 8.8 M | The ultra-lightweight Japanese recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Japanese and numbers. |
| chinese_cht_PP-OCRv3_mobile_rec | Inference Model / Training Model | 82.06 | 5.90 / 1.28 | 9.28 / 4.34 | 9.7 M | The ultra-lightweight Traditional Chinese recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Traditional Chinese and numbers. |
| te_PP-OCRv3_mobile_rec | Inference Model / Training Model | 95.88 | 5.42 / 0.82 | 8.10 / 6.91 | 7.8 M | The ultra-lightweight Telugu recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Telugu and numbers. |
| ka_PP-OCRv3_mobile_rec | Inference Model / Training Model | 96.96 | 5.25 / 0.79 | 9.09 / 3.86 | 8.0 M | The ultra-lightweight Kannada recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Kannada and numbers. |
| ta_PP-OCRv3_mobile_rec | Inference Model / Training Model | 76.83 | 5.23 / 0.75 | 10.13 / 4.30 | 8.0 M | The ultra-lightweight Tamil recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Tamil and numbers. |
| latin_PP-OCRv3_mobile_rec | Inference Model / Training Model | 76.93 | 5.20 / 0.79 | 8.83 / 7.15 | 7.8 M | The ultra-lightweight Latin recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Latin script and numbers. |
| arabic_PP-OCRv3_mobile_rec | Inference Model / Training Model | 73.55 | 5.35 / 0.79 | 8.80 / 4.56 | 7.8 M | The ultra-lightweight Arabic script recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Arabic script and numbers. |
| cyrillic_PP-OCRv3_mobile_rec | Inference Model / Training Model | 94.28 | 5.23 / 0.76 | 8.89 / 3.88 | 7.9 M | The ultra-lightweight Cyrillic alphabet recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Cyrillic letters and numbers. |
| devanagari_PP-OCRv3_mobile_rec | Inference Model / Training Model | 96.44 | 5.22 / 0.79 | 8.56 / 4.06 | 7.9 M | The ultra-lightweight Devanagari script recognition model trained based on the PP-OCRv3 recognition model, supporting the recognition of Devanagari script and numbers. |

Layout Region Detection Module Models (Optional):

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-DocLayout-L | Inference Model / Training Model | 90.4 | 34.6244 / 10.3945 | 510.57 / - | 123.76 M | A high-precision layout region localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, based on RT-DETR-L. |
| PP-DocLayout-M | Inference Model / Training Model | 75.2 | 13.3259 / 4.8685 | 44.0680 / 44.0680 | 22.578 | A layout region localization model with balanced accuracy and efficiency, trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, based on PicoDet-L. |
| PP-DocLayout-S | Inference Model / Training Model | 70.9 | 8.3008 / 2.3794 | 10.0623 / 9.9296 | 4.834 | A high-efficiency layout region localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, based on PicoDet-S. |
> ❗ The above list includes the 3 core models that are the focus of the layout detection module. The module supports a total of 11 models, including several predefined models with different category sets. The complete list of models is as follows:
👉Model List Details

* Table Layout Detection Models
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x_table | Inference Model / Training Model | 97.5 | 8.02 / 3.09 | 23.70 / 20.41 | 7.4 M | A high-efficiency layout region localization model trained on a self-built dataset based on PicoDet-1x, capable of locating table regions. |
* 3-Class Layout Detection Model, Including Tables, Images, and Stamps
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model / Training Model | 88.2 | 8.99 / 2.22 | 16.11 / 8.73 | 4.8 | A high-efficiency layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, based on the lightweight PicoDet-S model. |
| PicoDet-L_layout_3cls | Inference Model / Training Model | 89.0 | 13.05 / 4.50 | 41.30 / 41.30 | 22.6 | A layout region localization model with balanced efficiency and accuracy, trained on a self-built dataset of Chinese and English papers, magazines, and research reports, based on PicoDet-L. |
| RT-DETR-H_layout_3cls | Inference Model / Training Model | 95.8 | 114.93 / 27.71 | 947.56 / 947.56 | 470.1 | A high-precision layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, based on RT-DETR-H. |
* 5-Class English Document Area Detection Model, Including Text, Titles, Tables, Images, and Lists
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x | Inference Model / Training Model | 97.8 | 9.03 / 3.10 | 25.82 / 20.70 | 7.4 | A high-efficiency English document layout region localization model trained on the PubLayNet dataset, based on PicoDet-1x. |
* 17-Class Layout Detection Model, covering 17 common categories: paragraph title, image, text, number, abstract, content, chart title, formula, table, table title, reference, document title, footnote, header, algorithm, footer, and seal
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model / Training Model | 87.4 | 9.11 / 2.12 | 15.42 / 9.12 | 4.8 | A high-efficiency layout region detection model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, based on the lightweight PicoDet-S model. |
| PicoDet-L_layout_17cls | Inference Model / Training Model | 89.0 | 13.50 / 4.69 | 43.32 / 43.32 | 22.6 | A layout region detection model with balanced efficiency and accuracy, trained on a self-built dataset of Chinese and English papers, magazines, and research reports, based on PicoDet-L. |
| RT-DETR-H_layout_17cls | Inference Model / Training Model | 98.3 | 115.29 / 104.09 | 995.27 / 995.27 | 470.2 | A high-precision layout region detection model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, based on RT-DETR-H. |

Text Image Correction Module Model (Optional):

| Model | Model Download Link | MS-SSIM (%) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|
| UVDoc | Inference Model / Training Model | 54.40 | 30.3 M | A high-precision text image rectification model. |

Document Image Orientation Classification Module Model (Optional):

| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-LCNet_x1_0_doc_ori | Inference Model / Training Model | 99.06 | 2.31 / 0.43 | 3.37 / 1.27 | 7 | A document image classification model based on PP-LCNet_x1_0, with four categories: 0 degrees, 90 degrees, 180 degrees, and 270 degrees. |
Test Environment Description:

| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 precision / 8 threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
## 2. Quick Start

All pipelines provided by PaddleX can be quickly experienced. You can use the command line or Python locally to experience the General Table Recognition v2 Pipeline.

### 2.1 Online Experience

Online experience is not supported at the moment.

### 2.2 Local Experience

Before using the General Table Recognition v2 Pipeline locally, please ensure that you have installed the PaddleX wheel package according to the [PaddleX Local Installation Tutorial](../../../installation/installation.en.md).

### 2.3 Command Line Experience

You can quickly experience the table recognition pipeline with a single command. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition_v2.jpg), and replace `--input` with your local path for prediction.

```bash
paddlex --pipeline table_recognition_v2 \
        --use_doc_orientation_classify=False \
        --use_doc_unwarping=False \
        --input table_recognition_v2.jpg \
        --save_path ./output \
        --device gpu:0
```
👉 After running, the result obtained is:

```
{'res': {'input_path': 'table_recognition_v2.jpg', 'page_index': None, 'model_settings': {'use_doc_preprocessor': False, 'use_layout_detection': True, 'use_ocr_model': True}, 'layout_det_res': {'input_path': None, 'page_index': None, 'boxes': [{'cls_id': 8, 'label': 'table', 'score': 0.86655592918396, 'coordinate': [0.0125130415, 0.41920784, 1281.3737, 585.3884]}]}, 'overall_ocr_res': {'input_path': None, 'page_index': None, 'model_settings': {'use_doc_preprocessor': False, 'use_textline_orientation': False}, 'dt_polys': array([[[ 9, 21], ..., [ 9, 59]], ..., [[1046, 536], ..., [1046, 573]]], dtype=int16), 'text_det_params': {'limit_side_len': 960, 'limit_type': 'max', 'thresh': 0.3, 'box_thresh': 0.6, 'unclip_ratio': 2.0}, 'text_type': 'general', 'textline_orientation_angles': array([-1, ..., -1]), 'text_rec_score_thresh': 0, 'rec_texts': ['部门', '报销人', '报销事由', '批准人:', '单据', '张', '合计金额', '元', '车费票', '其', '火车费票', '飞机票', '中', '旅住宿费', '其他', '补贴'], 'rec_scores': array([0.99958128, ..., 0.99317062]), 'rec_polys': array([[[ 9, 21], ..., [ 9, 59]], ..., [[1046, 536], ..., [1046, 573]]], dtype=int16), 'rec_boxes': array([[ 9, ..., 59], ..., [1046, ..., 573]], dtype=int16)}, 'table_res_list': [{'cell_box_list': [array([ 0.13052222, ..., 73.08310249]), array([104.43082511, ..., 73.27777413]), array([319.39041221, ..., 73.30439308]), array([424.2436837 , ..., 73.44736794]), array([580.75836265, ..., 73.24003914]), array([723.04370201, ..., 73.22717598]), array([984.67315757, ..., 73.20420387]), array([1.25130415e-02, ..., 5.85419208e+02]), array([984.37072837, ..., 137.02281502]), array([984.26586998, ..., 201.22290352]), array([984.24017417, ..., 585.30775765]), array([1039.90606773, ..., 265.44664314]), array([1039.69549644, ..., 329.30540779]), array([1039.66546714, ..., 393.57319954]), array([1039.5122689 , ..., 457.74644783]), array([1039.55535972, ..., 521.73030403]), array([1039.58612144, ..., 585.09468392])], 'pred_html': '部门报销人报销事由批准人: 单据 张 合计金额 元 其 中车费票 火车费票 飞机票 旅住宿费 其他 补贴', 'table_ocr_pred': {'rec_polys': array([[[ 9, 21], ..., [ 9, 59]], ..., [[1046, 536], ..., [1046, 573]]], dtype=int16), 'rec_texts': ['部门', '报销人', '报销事由', '批准人:', '单据', '张', '合计金额', '元', '车费票', '其', '火车费票', '飞机票', '中', '旅住宿费', '其他', '补贴'], 'rec_scores': array([0.99958128, ..., 0.99317062]), 'rec_boxes': array([[ 9, ..., 59], ..., [1046, ..., 573]], dtype=int16)}}]}}
```

For an explanation of the result parameters, refer to the result interpretation in [2.4 Integration via Python Script](#24-integration-via-python-script). The visualization results are saved under `save_path`, including the visualization of the table recognition result.
### 2.4 Integration via Python Script

The command line above is intended for a quick experience and viewing of results. In a project, integration through code is usually required. You can complete fast inference with the pipeline in just a few lines of code:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="table_recognition_v2")

output = pipeline.predict(
    input="table_recognition_v2.jpg",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)

for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_xlsx("./output/")
    res.save_to_html("./output/")
    res.save_to_json("./output/")
```

In the above Python script, the following steps are executed:

(1) The `create_pipeline()` function is used to instantiate a General Table Recognition v2 Pipeline object. The parameters are described as follows:
| Parameter | Parameter Description | Parameter Type | Default Value |
|---|---|---|---|
| `pipeline` | The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. | `str` | `None` |
| `config` | The specific configuration information of the pipeline (if set together with `pipeline`, it takes priority over `pipeline`, and the pipeline name must be consistent with `pipeline`). | `dict[str, Any]` | `None` |
| `device` | The inference device for the pipeline. It supports specifying a specific GPU card number, such as "gpu:0", a specific card number for other hardware, such as "npu:0", or the CPU as "cpu". | `str` | `gpu:0` |
| `use_hpip` | Whether to enable high-performance inference; only available if the pipeline supports high-performance inference. | `bool` | `False` |
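For instance, the following minimal sketch shows how these initialization parameters can be combined; the configuration file path in the commented-out line is only a hypothetical placeholder:

```python
from paddlex import create_pipeline

# Instantiate the pipeline by name, pin inference to the first GPU,
# and leave high-performance inference disabled (the default).
pipeline = create_pipeline(
    pipeline="table_recognition_v2",
    device="gpu:0",
    use_hpip=False,
)

# Alternatively, instantiate from a pipeline configuration file
# (the path below is an illustrative example only).
# pipeline = create_pipeline(pipeline="./my_path/table_recognition_v2.yaml", device="cpu")
```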
(2) Call the `predict()` method of the General Table Recognition v2 Pipeline object for inference prediction. This method will return a `generator`. The parameters of the `predict()` method and their descriptions are as follows:
| Parameter | Description | Type | Options | Default Value |
|---|---|---|---|---|
| `input` | Data to be predicted; supports multiple input types. Required. | Python Var \| str \| list | • `Python Var`: image data represented by `numpy.ndarray`<br>• `str`: the local path of an image or PDF file, e.g. `/root/data/img.jpg`; a URL, such as the network URL of an image or PDF file: Example; or a local directory containing images to be predicted, e.g. `/root/data/` (prediction of PDF files inside a directory is currently not supported; PDF files must be specified by exact file path)<br>• `list`: list elements of the above types, e.g. `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]` | `None` |
| `device` | Inference device. | str \| None | • CPU: use the CPU for inference, e.g. `cpu`<br>• GPU: use the first GPU for inference, e.g. `gpu:0`<br>• NPU: use the first NPU for inference, e.g. `npu:0`<br>• XPU: use the first XPU for inference, e.g. `xpu:0`<br>• MLU: use the first MLU for inference, e.g. `mlu:0`<br>• DCU: use the first DCU for inference, e.g. `dcu:0`<br>• `None`: the default value from pipeline initialization is used; during initialization, the local GPU 0 is preferred, falling back to the CPU if it is unavailable | `None` |
| `use_doc_orientation_classify` | Whether to use the document orientation classification module. | bool \| None | • `bool`: `True` or `False`<br>• `None`: the default value from pipeline initialization is used (`True`) | `None` |
| `use_doc_unwarping` | Whether to use the document unwarping module. | bool \| None | • `bool`: `True` or `False`<br>• `None`: the default value from pipeline initialization is used (`True`) | `None` |
| `use_layout_detection` | Whether to use the layout detection module. | bool \| None | • `bool`: `True` or `False`<br>• `None`: the default value from pipeline initialization is used (`True`) | `None` |
| `text_det_limit_side_len` | Image side-length limit for text detection. | int \| None | • `int`: any integer greater than 0<br>• `None`: the default value from pipeline initialization is used (`960`) | `None` |
| `text_det_limit_type` | Type of the image side-length limit for text detection. | str \| None | • `str`: `min` or `max`; `min` ensures the shortest side of the image is not less than `limit_side_len`, while `max` ensures the longest side is not greater than `limit_side_len`<br>• `None`: the default value from pipeline initialization is used (`max`) | `None` |
| `text_det_thresh` | Detection pixel threshold; in the output probability map, pixels with scores greater than this threshold are considered text pixels. | float \| None | • `float`: any floating-point number greater than 0<br>• `None`: the default value from pipeline initialization is used (`0.3`) | `None` |
| `text_det_box_thresh` | Detection box threshold; a detected region is kept as a text region only if the average score of all pixels inside the box is greater than this threshold. | float \| None | • `float`: any floating-point number greater than 0<br>• `None`: the default value from pipeline initialization is used (`0.6`) | `None` |
| `text_det_unclip_ratio` | Text detection expansion ratio; this value determines how much the text region is expanded, with larger values resulting in greater expansion. | float \| None | • `float`: any floating-point number greater than 0<br>• `None`: the default value from pipeline initialization is used (`2.0`) | `None` |
| `text_rec_score_thresh` | Text recognition threshold; only text results with scores greater than this threshold are retained. | float \| None | • `float`: any floating-point number greater than 0<br>• `None`: the default value from pipeline initialization is used (`0.0`, i.e. no threshold) | `None` |
| `use_table_cells_ocr_results` | Whether to enable the table-cells OCR mode. When disabled, the global OCR result is used to fill the HTML table; when enabled, OCR is performed cell by cell and those results fill the HTML table (this increases time consumption). The two modes perform differently in different scenarios; choose according to your actual situation. | bool | • `bool`: `True` or `False` | `False` |
| `use_e2e_wired_table_rec_model` | Whether to enable the end-to-end prediction mode for wired tables. When disabled, the table cell detection model's predictions are used to fill the HTML table; when enabled, the cell predictions of the end-to-end table structure recognition model are used instead. The two modes perform differently in different scenarios; choose according to your actual situation. | bool | • `bool`: `True` or `False` | `False` |
| `use_e2e_wireless_table_rec_model` | Whether to enable the end-to-end prediction mode for wireless tables. When disabled, the table cell detection model's predictions are used to fill the HTML table; when enabled, the cell predictions of the end-to-end table structure recognition model are used instead. The two modes perform differently in different scenarios; choose according to your actual situation. | bool | • `bool`: `True` or `False` | `False` |
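As an illustrative sketch of how these options are passed, the call below combines a few of the parameters listed above; the specific values are arbitrary examples rather than recommended settings:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="table_recognition_v2")

# Example values only: raise the text detection side-length limit, filter out
# low-confidence recognition results, and fill the HTML table from per-cell OCR.
output = pipeline.predict(
    input="table_recognition_v2.jpg",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    text_det_limit_side_len=1280,
    text_rec_score_thresh=0.5,
    use_table_cells_ocr_results=True,
)

for res in output:
    res.print()
```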
If you need to use an end-to-end table structure recognition model, simply replace the corresponding table structure recognition model with the end-to-end model in the pipeline configuration file, then load the modified configuration file and set the corresponding `predict()` parameter. For example, to use SLANet_plus for end-to-end recognition of wireless tables, set `model_name` to SLANet_plus under `WirelessTableStructureRecognition` in the configuration file (as shown below) and specify `use_e2e_wireless_table_rec_model=True` at prediction time; nothing else needs to be modified. In this case the wireless table cell detection model does not take effect, and SLANet_plus is used directly for end-to-end table recognition.

```yaml
SubModules:
  WiredTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANeXt_wired
    model_dir: null
  WirelessTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANet_plus  # Replace with the end-to-end table structure recognition model
    model_dir: null
  WiredTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wired_table_cell_det
    model_dir: null
  WirelessTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wireless_table_cell_det
    model_dir: null
```
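A minimal sketch of the matching prediction call, assuming the modified configuration has been saved as `./my_path/table_recognition_v2.yaml` (an illustrative path):

```python
from paddlex import create_pipeline

# Load the modified pipeline configuration (hypothetical local path).
pipeline = create_pipeline(pipeline="./my_path/table_recognition_v2.yaml")

# With this flag enabled, the wireless table cell detection model is bypassed
# and SLANet_plus performs end-to-end structure recognition for wireless tables.
output = pipeline.predict(
    input="table_recognition_v2.jpg",
    use_e2e_wireless_table_rec_model=True,
)

for res in output:
    res.save_to_html("./output/")
```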
(3) Process the prediction results. The prediction result of each sample is a corresponding `Result` object, which supports printing, saving as an image, saving as an `xlsx` file, saving as an `HTML` file, and saving as a `json` file:

| Method | Method Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the result to the terminal | `format_json` | `bool` | Whether to format the output content with `JSON` indentation | `True` |
| | | `indent` | `int` | Indentation level used to beautify the `JSON` output and make it more readable; only effective when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-`ASCII` characters to `Unicode`; `True` escapes all non-`ASCII` characters, `False` keeps the original characters; only effective when `format_json` is `True` | `False` |
| `save_to_json()` | Save the result as a JSON file | `save_path` | `str` | The file path for saving. If it is a directory, the saved file is named after the input file | None |
| | | `indent` | `int` | Indentation level used to beautify the `JSON` output and make it more readable; only effective when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-`ASCII` characters to `Unicode`; `True` escapes all non-`ASCII` characters, `False` keeps the original characters; only effective when `format_json` is `True` | `False` |
| `save_to_img()` | Save the result as an image file | `save_path` | `str` | The file path for saving; supports both directory and file paths | None |
| `save_to_xlsx()` | Save the result as an xlsx file | `save_path` | `str` | The file path for saving; supports both directory and file paths | None |
| `save_to_html()` | Save the result as an HTML file | `save_path` | `str` | The file path for saving; supports both directory and file paths | None |
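For example, a small sketch of how these methods can be combined per result (continuing from the prediction loop above; the output paths are arbitrary):

```python
for res in output:
    # Pretty-print the result with 2-space indentation, keeping non-ASCII text readable.
    res.print(format_json=True, indent=2, ensure_ascii=False)
    # Save the structured result plus the recognized table in HTML and Excel form.
    res.save_to_json(save_path="./output/")
    res.save_to_html(save_path="./output/")
    res.save_to_xlsx(save_path="./output/")
```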
- Calling the `print()` method will print the results to the terminal. The printed content is explained as follows:

    - `input_path`: `(str)` The input path of the image to be predicted.
    - `page_index`: `(Union[int, None])` If the input is a PDF file, indicates which page of the PDF is currently being processed; otherwise `None`.
    - `model_settings`: `(Dict[str, bool])` Configuration parameters for the pipeline models.
        - `use_doc_preprocessor`: `(bool)` Controls whether to enable the document preprocessing sub-pipeline.
        - `use_layout_detection`: `(bool)` Controls whether to enable the layout detection sub-pipeline.
        - `use_ocr_model`: `(bool)` Controls whether to enable the OCR sub-pipeline.
    - `layout_det_res`: `(Dict[str, Union[List[numpy.ndarray], List[float]]])` Output of the layout detection sub-module. Only present when `use_layout_detection=True`.
        - `input_path`: `(Union[str, None])` The image path accepted by the layout detection module; saved as `None` when the input is a `numpy.ndarray`.
        - `page_index`: `(Union[int, None])` If the input is a PDF file, indicates which page of the PDF is currently being processed; otherwise `None`.
        - `boxes`: `(List[Dict])` A list of detected layout region boxes, with each element containing the following fields:
            - `cls_id`: `(int)` The class ID of the detected box.
            - `score`: `(float)` The confidence score of the detected box.
            - `coordinate`: `(List[float])` The coordinates of the detected box, in the order x1, y1, x2, y2, i.e. the x- and y-coordinates of the top-left corner followed by the x- and y-coordinates of the bottom-right corner.
    - `doc_preprocessor_res`: `(Dict[str, Union[str, Dict[str, bool], int]])` Output of the document preprocessing sub-pipeline. Only present when `use_doc_preprocessor=True`.
        - `input_path`: `(Union[str, None])` The image path accepted by the preprocessing sub-pipeline; saved as `None` when the input is a `numpy.ndarray`.
        - `model_settings`: `(Dict)` Model configuration parameters of the preprocessing sub-pipeline.
            - `use_doc_orientation_classify`: `(bool)` Controls whether to enable document orientation classification.
            - `use_doc_unwarping`: `(bool)` Controls whether to enable document unwarping.
        - `angle`: `(int)` The predicted document orientation. When enabled, the values are [0, 1, 2, 3], corresponding to [0°, 90°, 180°, 270°]; when disabled, it is -1.
    - `dt_polys`: `(List[numpy.ndarray])` A list of polygon boxes for text detection. Each detection box is a numpy array of 4 vertex coordinates with shape (4, 2) and dtype int16.
    - `dt_scores`: `(List[float])` A list of confidence scores for the text detection boxes.
    - `text_det_params`: `(Dict[str, Dict[str, int, float]])` Configuration parameters of the text detection module.
        - `limit_side_len`: `(int)` The side-length limit used during image preprocessing.
        - `limit_type`: `(str)` How the side-length limit is applied.
        - `thresh`: `(float)` The confidence threshold for classifying text pixels.
        - `box_thresh`: `(float)` The confidence threshold for text detection boxes.
        - `unclip_ratio`: `(float)` The expansion ratio for text detection boxes.
    - `text_type`: `(str)` The type of text detection, currently fixed as "general".
    - `text_rec_score_thresh`: `(float)` The filtering threshold for text recognition results.
    - `rec_texts`: `(List[str])` A list of text recognition results, containing only texts with confidence scores above `text_rec_score_thresh`.
    - `rec_scores`: `(List[float])` A list of text recognition confidence scores, filtered by `text_rec_score_thresh`.
    - `rec_polys`: `(List[numpy.ndarray])` A list of text detection boxes filtered by confidence score, in the same format as `dt_polys`.
    - `rec_boxes`: `(numpy.ndarray)` An array of rectangular bounding boxes with shape (n, 4) and dtype int16. Each row is [x_min, y_min, x_max, y_max], where (x_min, y_min) is the top-left corner and (x_max, y_max) is the bottom-right corner.

- Calling the `save_to_json()` method will save the above content to the specified `save_path`. If a directory is given, the file is saved as `save_path/{your_img_basename}.json`; if a file path is given, it is saved directly to that file. Since JSON files do not support saving numpy arrays, `numpy.array` values are converted to lists.
- Calling the `save_to_img()` method will save the visualization results to the specified `save_path`. If a directory is given, the file is saved as `save_path/{your_img_basename}_ocr_res_img.{your_img_extension}`; if a file path is given, it is saved directly to that file. (Because the pipeline usually produces several result images, specifying a single file path is not recommended, since later images would overwrite earlier ones and only the last image would remain.)
- Calling the `save_to_html()` method will save the above content to the specified `save_path`. If a directory is given, the file is saved as `save_path/{your_img_basename}.html`; if a file path is given, it is saved directly to that file. In the General Table Recognition v2 Pipeline, the HTML form of the table in the image is written to the specified HTML file.
- Calling the `save_to_xlsx()` method will save the above content to the specified `save_path`. If a directory is given, the file is saved as `save_path/{your_img_basename}.xlsx`; if a file path is given, it is saved directly to that file. In the General Table Recognition v2 Pipeline, the Excel form of the table in the image is written to the specified XLSX file.

In addition, the visualization images and the prediction result can also be obtained through the following attributes:
| Attribute | Description |
|---|---|
| `json` | Get the prediction result in `json` format. |
| `img` | Get the visualization images in `dict` format. |
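The sketch below (again continuing from the prediction loop) shows one way these attributes might be used; the exact keys present in `img` depend on which sub-modules are enabled:

```python
for res in output:
    # The structured prediction as a Python dict (same content as save_to_json()).
    prediction = res.json

    # Visualization images keyed by name, e.g. "table_res_img", "ocr_res_img",
    # "layout_res_img", "preprocessed_img"; a key is absent if its sub-module was not used.
    for name, image in res.img.items():
        image.save(f"./output/{name}.png")
```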
- The prediction result obtained through the `json` attribute is a `dict`, and its content is consistent with what the `save_to_json()` method saves.
- The prediction result returned by the `img` attribute is a dictionary whose keys are `table_res_img`, `ocr_res_img`, `layout_res_img`, and `preprocessed_img`, corresponding to four `Image.Image` objects: the visualization of the table recognition results, the OCR results, the layout detection results, and the image preprocessing, respectively. If a sub-module is not used, the corresponding image is not included in the dictionary.

In addition, you can obtain the General Table Recognition v2 Pipeline configuration file and load it for prediction. Execute the following command to save the configuration file in `my_path`:

```bash
paddlex --get_pipeline_config table_recognition_v2 --save_path ./my_path
```

Once you have the configuration file, you can customize the settings of the General Table Recognition v2 Pipeline by setting the `pipeline` parameter of `create_pipeline` to the path of the configuration file. For example:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/table_recognition_v2.yaml")

output = pipeline.predict(
    input="table_recognition_v2.jpg",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)

for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_xlsx("./output/")
    res.save_to_html("./output/")
    res.save_to_json("./output/")
```

Note: The parameters in the configuration file are the pipeline initialization parameters. If you want to change the initialization parameters of the General Table Recognition v2 Pipeline, modify them in the configuration file and load it for prediction. CLI prediction also supports passing in a configuration file: specify its path with `--pipeline`.

## 3. Development Integration / Deployment

If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration / deployment.

If you need to apply the pipeline directly in your Python project, refer to the example code in [2.4 Integration via Python Script](#24-integration-via-python-script).

In addition, PaddleX provides three other deployment methods, detailed as follows:

🚀 High-Performance Inference: In real production environments, many applications have strict performance requirements (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing to significantly speed up the end-to-end process. For details, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.en.md).

☁️ Serving Deployment: Serving deployment is a common form of deployment in production environments. By encapsulating the inference functionality as a service, clients can access it through network requests to obtain inference results. PaddleX supports multiple serving deployment solutions for pipelines.
For detailed information on serving deployment, please refer to the [PaddleX Serving Deployment Guide](../../../pipeline_deploy/serving.en.md). Below are the API references for basic serving deployment and multi-language service invocation examples:
API Reference

For the main operations provided by the service:

When the request is processed successfully, the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Fixed as `0`. |
| `errorMsg` | `string` | Error message. Fixed as `"Success"`. |
| `result` | `object` | The result of the operation. |

When the request is not processed successfully, the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Same as the response status code. |
| `errorMsg` | `string` | Error message. |

The main operations provided by the service are as follows:

Locate and recognize tables in the image.

`POST /table-recognition`

The request body has the following properties:

| Name | Type | Meaning | Required |
|---|---|---|---|
| `file` | `string` | The URL of an image or PDF file accessible to the server, or the Base64-encoded content of such a file. For PDF files exceeding 10 pages, only the first 10 pages are used. | Yes |
| `fileType` | `integer \| null` | The file type. `0` indicates a PDF file, `1` indicates an image file. If this attribute is not present in the request body, the file type is inferred from the URL. | No |
| `useDocOrientationClassify` | `boolean \| null` | See the description of the `use_doc_orientation_classify` parameter of the pipeline object's `predict` method. | No |
| `useDocUnwarping` | `boolean \| null` | See the description of the `use_doc_unwarping` parameter of the pipeline object's `predict` method. | No |
| `useLayoutDetection` | `boolean \| null` | See the description of the `use_layout_detection` parameter of the pipeline object's `predict` method. | No |
| `useOcrModel` | `boolean \| null` | See the description of the `use_ocr_model` parameter of the pipeline object's `predict` method. | No |
| `layoutThreshold` | `number \| null` | See the description of the `layout_threshold` parameter of the pipeline object's `predict` method. | No |
| `layoutNms` | `boolean \| null` | See the description of the `layout_nms` parameter of the pipeline object's `predict` method. | No |
| `layoutUnclipRatio` | `number \| array \| null` | See the description of the `layout_unclip_ratio` parameter of the pipeline object's `predict` method. | No |
| `layoutMergeBboxesMode` | `string \| null` | See the description of the `layout_merge_bboxes_mode` parameter of the pipeline object's `predict` method. | No |
| `textDetLimitSideLen` | `integer \| null` | See the description of the `text_det_limit_side_len` parameter of the pipeline object's `predict` method. | No |
| `textDetLimitType` | `string \| null` | See the description of the `text_det_limit_type` parameter of the pipeline object's `predict` method. | No |
| `textDetThresh` | `number \| null` | See the description of the `text_det_thresh` parameter of the pipeline object's `predict` method. | No |
| `textDetBoxThresh` | `number \| null` | See the description of the `text_det_box_thresh` parameter of the pipeline object's `predict` method. | No |
| `textDetUnclipRatio` | `number \| null` | See the description of the `text_det_unclip_ratio` parameter of the pipeline object's `predict` method. | No |
| `textRecScoreThresh` | `number \| null` | See the description of the `text_rec_score_thresh` parameter of the pipeline object's `predict` method. | No |
| `useTableCellsOcrResults` | `boolean` | See the description of the `use_table_cells_ocr_results` parameter of the pipeline object's `predict` method. | No |
| `useE2eWiredTableRecModel` | `boolean` | See the description of the `use_e2e_wired_table_rec_model` parameter of the pipeline object's `predict` method. | No |
| `useE2eWirelessTableRecModel` | `boolean` | See the description of the `use_e2e_wireless_table_rec_model` parameter of the pipeline object's `predict` method. | No |
When the request is processed successfully, the `result` of the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| `tableRecResults` | `object` | The table recognition results. The array length is 1 (for image input) or the smaller of the number of document pages and 10 (for PDF input). For PDF input, each element in the array represents the processing result of one page of the PDF file. |
| `dataInfo` | `object` | Information about the input data. |

Each element in `tableRecResults` is an object with the following properties:

| Name | Type | Description |
|---|---|---|
| `prunedResult` | `object` | A simplified version of the `res` field in the JSON representation of the result generated by the pipeline object's `predict` method, with the `input_path` field removed. |
| `outputImages` | `object \| null` | Refer to the `img` property description of the pipeline prediction results. The images are in JPEG format and Base64-encoded. |
| `inputImage` | `string \| null` | The input image. The image is in JPEG format and Base64-encoded. |
Multi-language Service Invocation Example

**Python**

```python
import base64
import requests

API_URL = "http://localhost:8080/table-recognition"
file_path = "./demo.jpg"

with open(file_path, "rb") as file:
    file_bytes = file.read()
    file_data = base64.b64encode(file_bytes).decode("ascii")

payload = {"file": file_data, "fileType": 1}

response = requests.post(API_URL, json=payload)

assert response.status_code == 200
result = response.json()["result"]
for i, res in enumerate(result["tableRecResults"]):
    print(res["prunedResult"])
    for img_name, img in res["outputImages"].items():
        img_path = f"{img_name}_{i}.jpg"
        with open(img_path, "wb") as f:
            f.write(base64.b64decode(img))
        print(f"Output image saved at {img_path}")
```

📱 Edge Deployment: Edge deployment places computing and data processing capabilities directly on user devices, allowing them to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.en.md).

You can choose the appropriate deployment method according to your needs and integrate the model into your AI application.

## 4. Custom Development

If the default model weights provided by the General Table Recognition v2 Pipeline do not meet your requirements in terms of accuracy or speed, you can try to further fine-tune the existing models using your own domain-specific or application data to improve recognition performance in your specific scenario.

### 4.1 Model Fine-Tuning

Since the General Table Recognition v2 Pipeline consists of several modules, unsatisfactory overall performance may be caused by any one of them. You can analyze the images with poor recognition results to identify the problematic module, and then refer to the corresponding fine-tuning tutorial links in the table below.
| Scenario | Module to Fine-Tune | Reference Link |
|---|---|---|
| Table classification error | Table Classification Module | Link |
| Table cell positioning error | Table Cell Detection Module | Link |
| Table structure recognition error | Table Structure Recognition Module | Link |
| Failed to detect the table region | Layout Region Detection Module | Link |
| Text detection omission | Text Detection Module | Link |
| Inaccurate text content | Text Recognition Module | Link |
| Inaccurate image rotation correction | Document Image Orientation Classification Module | Link |
| Inaccurate image distortion correction | Text Image Correction Module | Fine-tuning not supported |
### 4.2 Model Application

After fine-tuning with your private dataset, you will obtain a local model weight file. To use the fine-tuned model weights, simply fill in the local path of the fine-tuned weights at the corresponding position in the pipeline configuration file:

```yaml
SubModules:
  LayoutDetection:
    module_name: layout_detection
    model_name: PicoDet_layout_1x_table
    model_dir: null
  TableClassification:
    module_name: table_classification
    model_name: PP-LCNet_x1_0_table_cls
    model_dir: null
  WiredTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANeXt_wired
    model_dir: null
  WirelessTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANeXt_wireless
    model_dir: null
  WiredTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wired_table_cell_det
    model_dir: null
  WirelessTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wireless_table_cell_det
    model_dir: null

SubPipelines:
  DocPreprocessor:
    pipeline_name: doc_preprocessor
    use_doc_orientation_classify: True
    use_doc_unwarping: True
    SubModules:
      DocOrientationClassify:
        module_name: doc_text_orientation
        model_name: PP-LCNet_x1_0_doc_ori
        model_dir: null
      DocUnwarping:
        module_name: image_unwarping
        model_name: UVDoc
        model_dir: null
  GeneralOCR:
    pipeline_name: OCR
    text_type: general
    use_doc_preprocessor: False
    use_textline_orientation: False
    SubModules:
      TextDetection:
        module_name: text_detection
        model_name: PP-OCRv4_server_det
        model_dir: null
        limit_side_len: 960
        limit_type: max
        thresh: 0.3
        box_thresh: 0.4
        unclip_ratio: 2.0
      TextRecognition:
        module_name: text_recognition
        model_name: PP-OCRv4_server_rec
        model_dir: null
        batch_size: 1
        score_thresh: 0
```

Subsequently, refer to the command-line method or Python script method in [2.2 Local Experience](#22-local-experience) to load the modified pipeline configuration file.

## 5. Support for Multiple Hardware Devices

PaddleX supports a variety of mainstream hardware devices, including NVIDIA GPU, Kunlunxin XPU, Ascend NPU, and Cambricon MLU. Simply modify the `--device` parameter to seamlessly switch between different hardware devices. For example, to run General Table Recognition v2 Pipeline inference on an Ascend NPU, the CLI command is:

```bash
paddlex --pipeline table_recognition_v2 \
        --use_doc_orientation_classify=False \
        --use_doc_unwarping=False \
        --input table_recognition_v2.jpg \
        --save_path ./output \
        --device npu:0
```

If you want to use the General Table Recognition v2 Pipeline on a wider variety of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).