---
comments: true
---
# General Table Recognition Pipeline v2 User Guide
## 1. Introduction to General Table Recognition Pipeline v2
Table recognition is a technology that automatically identifies and extracts table content and its structure from documents or images. It is widely used in data entry, information retrieval, and document analysis. By using computer vision and machine learning algorithms, table recognition can convert complex table information into an editable format, making it easier for users to further process and analyze data.
The General Table Recognition Pipeline v2 is designed to solve table recognition tasks by identifying tables in images and outputting them in HTML format. Unlike the General Table Recognition Pipeline, this pipeline introduces two additional modules: table classification and table cell detection, which are linked with the table structure recognition module to complete the table recognition task. This pipeline can achieve accurate table predictions and is applicable in various fields such as general, manufacturing, finance, and transportation. It also provides flexible service deployment options, supporting multiple programming languages on various hardware. Additionally, it offers secondary development capabilities, allowing you to train and fine-tune models on your own dataset, with seamless integration of the trained models.
The General Table Recognition Pipeline v2 includes mandatory modules for table structure recognition, table classification, table cell detection, text detection, and text recognition, as well as optional modules for layout region detection, document image orientation classification, and text image correction.
Choose a model according to your priorities: higher accuracy, faster inference speed, or smaller storage size.
👉 Model List Details

**Table Structure Recognition Module Models:**

| Model | Model Download Link | Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| SLANeXt_wired | Inference Model/Training Model | 69.65 | -- | -- | 351M | The SLANeXt series is a new generation of table structure recognition models developed by the Baidu PaddlePaddle Vision Team. Compared to SLANet and SLANet_plus, SLANeXt focuses on table structure recognition and provides dedicated weights for both wired and wireless tables, significantly improving recognition of all table types, especially wired tables. |
| SLANeXt_wireless | Inference Model/Training Model | 63.69 | 522.536 | 1845.37 | 6.9 M | See SLANeXt_wired; the SLANeXt series description covers both the wired and wireless variants. |
**Table Classification Module Models:**

| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) |
|---|---|---|---|---|---|
| PP-LCNet_x1_0_table_cls | Inference Model/Training Model | 94.2 | 2.35 / 0.47 | 4.03 / 1.35 | 6.6M |
**Table Cell Detection Module Models:**

| Model | Model Download Link | mAP (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| RT-DETR-L_wired_table_cell_det | Inference Model/Training Model | 82.7 | 35.00 / 10.45 | 495.51 / 495.51 | 124M | RT-DETR is the first real-time end-to-end object detection model. The Baidu PaddlePaddle Vision Team pretrained RT-DETR-L on a self-built table cell detection dataset, achieving good performance on both wired and wireless table cells. |
| RT-DETR-L_wireless_table_cell_det | Inference Model/Training Model | 82.7 | 35.00 / 10.45 | 495.51 / 495.51 | 124M | See RT-DETR-L_wired_table_cell_det; the reported metrics apply to both models. |
**Text Detection Module Models:**

| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_det | Inference Model/Training Model | 82.69 | 83.34 / 80.91 | 442.58 / 442.58 | 109 | The server-side text detection model of PP-OCRv4, with higher accuracy, suitable for deployment on high-performance servers. |
| PP-OCRv4_mobile_det | Inference Model/Training Model | 77.79 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 | The mobile text detection model of PP-OCRv4, with higher efficiency, suitable for deployment on edge devices. |
**Text Recognition Module Models:**

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-OCRv4_mobile_rec | Inference Model/Training Model | 78.20 | 4.82 / 4.82 | 16.74 / 4.64 | 10.6 M | PP-OCRv4 is the next version of the self-developed text recognition model PP-OCRv3 by the Baidu PaddlePaddle Vision Team. By introducing data augmentation schemes and GTC-NRTR guidance branches, it further improves text recognition accuracy without changing the model inference speed. It provides both server and mobile versions to meet industrial needs in different scenarios. |
| PP-OCRv4_server_rec | Inference Model/Training Model | 79.20 | 6.58 / 6.58 | 33.17 / 33.17 | 71.2 M | See PP-OCRv4_mobile_rec; the PP-OCRv4 description covers both the mobile and server versions. |

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| ch_SVTRv2_rec | Inference Model/Training Model | 68.81 | 8.08 / 8.08 | 50.17 / 42.50 | 73.9 M | SVTRv2 is a server-side text recognition model developed by the OpenOCR team from Fudan University's Vision and Learning Laboratory (FVL). It won the first prize in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task, with a 6% improvement in end-to-end recognition accuracy compared to PP-OCRv4. |

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| ch_RepSVTR_rec | Inference Model/Training Model | 65.07 | 5.93 / 5.93 | 20.73 / 7.32 | 22.1 M | RepSVTR is a mobile text recognition model based on SVTRv2. It won the first prize in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task, with a 2.5% improvement in end-to-end recognition accuracy compared to PP-OCRv4 and comparable inference speed. |
**Layout Region Detection Module Models (Optional):**

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-DocLayout-L | Inference Model/Training Model | 90.4 | 34.6244 / 10.3945 | 510.57 / - | 123.76 M | A high-precision layout region localization model based on RT-DETR-L, trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports. |
| PP-DocLayout-M | Inference Model/Training Model | 75.2 | 13.3259 / 4.8685 | 44.0680 / 44.0680 | 22.578 | A layout region localization model with balanced precision and efficiency based on PicoDet-L, trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports. |
| PP-DocLayout-S | Inference Model/Training Model | 70.9 | 8.3008 / 2.3794 | 10.0623 / 9.9296 | 4.834 | A high-efficiency layout region localization model based on PicoDet-S, trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports. |
> ❗ The above list includes the 3 core models that are the focus of the layout detection module. The module supports a total of 11 full models, including multiple predefined models with different categories. The complete list of models is as follows:
* Table Layout Detection Models

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x_table | Inference Model/Training Model | 97.5 | 8.02 / 3.09 | 23.70 / 20.41 | 7.4 M | A high-efficiency layout region localization model trained on a self-built dataset using PicoDet-1x, capable of locating 1 type of region: tables |
* 3-category layout detection model, including tables, images, and seals

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model/Training Model | 88.2 | 8.99 / 2.22 | 16.11 / 8.73 | 4.8 | A high-efficiency layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using the lightweight PicoDet-S model |
| PicoDet-L_layout_3cls | Inference Model/Training Model | 89.0 | 13.05 / 4.50 | 41.30 / 41.30 | 22.6 | A layout region localization model with balanced efficiency and accuracy, trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-L |
| RT-DETR-H_layout_3cls | Inference Model/Training Model | 95.8 | 114.93 / 27.71 | 947.56 / 947.56 | 470.1 | A high-precision layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using RT-DETR-H |
* 5-category English document region detection model, including text, title, table, image, and list

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x | Inference Model/Training Model | 97.8 | 9.03 / 3.10 | 25.82 / 20.70 | 7.4 | A high-efficiency English document layout region localization model trained on the PubLayNet dataset using PicoDet-1x |
* 17-category region detection model, covering 17 common layout categories: paragraph title, image, text, number, abstract, content, figure title, formula, table, table title, reference, document title, footnote, header, algorithm, footer, seal

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model/Training Model | 87.4 | 9.11 / 2.12 | 15.42 / 9.12 | 4.8 | A high-efficiency layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using the lightweight PicoDet-S model |
| PicoDet-L_layout_17cls | Inference Model/Training Model | 89.0 | 13.50 / 4.69 | 43.32 / 43.32 | 22.6 | A layout region localization model with balanced efficiency and accuracy, trained on a self-built dataset of Chinese and English papers, magazines, and research reports using PicoDet-L |
| RT-DETR-H_layout_17cls | Inference Model/Training Model | 98.3 | 115.29 / 104.09 | 995.27 / 995.27 | 470.2 | A high-precision layout region localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports using RT-DETR-H |
**Text Image Correction Module Model (Optional):**

| Model | Model Download Link | MS-SSIM (%) | Model Storage Size (M) | Description |
|---|---|---|---|---|
| UVDoc | Inference Model/Training Model | 54.40 | 30.3 M | High-precision text image correction model |

The accuracy metrics of the model are measured on the DocUNet benchmark.
**Document Image Orientation Classification Module Model (Optional):**

| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-LCNet_x1_0_doc_ori | Inference Model/Training Model | 99.06 | 2.31 / 0.43 | 3.37 / 1.27 | 7 | Document image orientation classification model based on PP-LCNet_x1_0, with four categories: 0 degrees, 90 degrees, 180 degrees, and 270 degrees |
**Test Environment Description**:
- **Performance Test Environment**
- **Test Dataset**:
- Document Image Orientation Classification Model: A self-built dataset using PaddleX, covering various scenarios such as ID cards and documents, containing 1000 images.
- Layout Region Detection Model: A self-built layout region detection dataset using PaddleOCR, including 500 images of common document types such as Chinese and English papers, magazines, contracts, books, exam papers, and research reports.
- Table Layout Detection Model: A self-built table region detection dataset using PaddleOCR, containing 7835 images of paper documents with tables in both Chinese and English.
- 3-Category Layout Detection Model: A self-built layout region detection dataset using PaddleOCR, including 1154 images of common document types such as Chinese and English papers, magazines, and research reports.
- 5-Category English Document Region Detection Model: The evaluation dataset from [PubLayNet](https://developer.ibm.com/exchanges/data/all/publaynet), containing 11245 images of English documents.
- 17-Category Region Detection Model: A self-built layout region detection dataset using PaddleOCR, including 892 images of common document types such as Chinese and English papers, magazines, and research reports.
- Table Structure Recognition Model: A self-built high-difficulty Chinese table recognition dataset using PaddleX.
- Table Cell Detection Model: A self-built evaluation dataset using PaddleX.
- Table Classification Model: A self-built evaluation dataset using PaddleX.
- Text Detection Model: A self-built Chinese dataset using PaddleOCR, covering various scenarios such as street scenes, web images, documents, and handwriting, with 500 images for detection.
- Chinese Recognition Model: A self-built Chinese dataset using PaddleOCR, covering various scenarios such as street scenes, web images, documents, and handwriting, with 11,000 images for text recognition.
- ch_SVTRv2_rec: The A-rank evaluation set from the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task.
- ch_RepSVTR_rec: The B-rank evaluation set from the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task.
- English Recognition Model: A self-built English dataset using PaddleX.
- Multilingual Recognition Model: A self-built multi-language dataset using PaddleX.
- **Hardware Configuration**:
- GPU: NVIDIA Tesla T4
- CPU: Intel Xeon Gold 6271C @ 2.60GHz
- Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
- **Inference Mode Description**
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|-------------|----------------------------------------|-------------------|---------------------------------------------------|
| Normal Mode | FP32 Precision / No TRT Acceleration | FP32 Precision / 8 Threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 Precision / 8 Threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
## 2. Quick Start
All pipelines provided by PaddleX can be tried out quickly. You can experience the effect of the General Table Recognition Pipeline v2 locally using the command line or Python.
### 2.1 Online Experience
Online experience is not supported at the moment.
### 2.2 Local Experience
Before using the General Table Recognition pipeline v2 locally, please ensure that you have completed the installation of the PaddleX wheel package according to the [PaddleX Local Installation Tutorial](../../../installation/installation.en.md).
#### 2.2.1 Command Line Experience
You can quickly experience the effect of the table recognition pipeline with one command. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg), and replace `--input` with the local path for prediction.
```bash
paddlex --pipeline table_recognition_v2 \
--input table_recognition.jpg \
--save_path ./output \
--device gpu:0
```
For parameter descriptions, refer to [2.2.2 Python Script Integration](#222-python-script-integration).
👉 After running, the result is: (Click to expand)
```bash
{'res': {'input_path': 'table_recognition.jpg', 'model_settings': {'use_doc_preprocessor': False, 'use_layout_detection': True, 'use_ocr_model': True}, 'layout_det_res': {'input_path': None, 'page_index': None, 'boxes': [{'cls_id': 0, 'label': 'Table', 'score': 0.9922188520431519, 'coordinate': [3.0127392, 0.14648987, 547.5102, 127.72023]}]}, 'overall_ocr_res': {'input_path': None, 'page_index': None, 'model_settings': {'use_doc_preprocessor': False, 'use_textline_orientation': False}, 'dt_polys': [array([[234, 6],
[316, 6],
[316, 25],
[234, 25]], dtype=int16), array([[38, 39],
[73, 39],
[73, 57],
[38, 57]], dtype=int16), array([[122, 32],
[201, 32],
[201, 58],
[122, 58]], dtype=int16), array([[227, 34],
[346, 34],
[346, 57],
[227, 57]], dtype=int16), array([[351, 34],
[391, 34],
[391, 58],
[351, 58]], dtype=int16), array([[417, 35],
[534, 35],
[534, 58],
[417, 58]], dtype=int16), array([[34, 70],
[78, 70],
[78, 90],
[34, 90]], dtype=int16), array([[287, 70],
[328, 70],
[328, 90],
[287, 90]], dtype=int16), array([[454, 69],
[496, 69],
[496, 90],
[454, 90]], dtype=int16), array([[ 17, 101],
[ 95, 101],
[ 95, 124],
[ 17, 124]], dtype=int16), array([[144, 101],
[178, 101],
[178, 122],
[144, 122]], dtype=int16), array([[278, 101],
[338, 101],
[338, 124],
[278, 124]], dtype=int16), array([[448, 101],
[503, 101],
[503, 121],
[448, 121]], dtype=int16)], 'text_det_params': {'limit_side_len': 960, 'limit_type': 'max', 'thresh': 0.3, 'box_thresh': 0.6, 'unclip_ratio': 2.0}, 'text_type': 'general', 'textline_orientation_angles': [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], 'text_rec_score_thresh': 0, 'rec_texts': ['CRuncover', 'Dres', '连续工作3', '取出来放在网上', '没想', '江、整江等八大', 'Abstr', 'rSrivi', '$709.', 'cludingGiv', '2.72', 'Ingcubic', '$744.78'], 'rec_scores': [0.9951260685920715, 0.9943379759788513, 0.9968608021736145, 0.9978817105293274, 0.9985721111297607, 0.9616036415100098, 0.9977153539657593, 0.987593948841095, 0.9906861186027527, 0.9959743618965149, 0.9970152378082275, 0.9977849721908569, 0.9984450936317444], 'rec_polys': [array([[234, 6],
[316, 6],
[316, 25],
[234, 25]], dtype=int16), array([[38, 39],
[73, 39],
[73, 57],
[38, 57]], dtype=int16), array([[122, 32],
[201, 32],
[201, 58],
[122, 58]], dtype=int16), array([[227, 34],
[346, 34],
[346, 57],
[227, 57]], dtype=int16), array([[351, 34],
[391, 34],
[391, 58],
[351, 58]], dtype=int16), array([[417, 35],
[534, 35],
[534, 58],
[417, 58]], dtype=int16), array([[34, 70],
[78, 70],
[78, 90],
[34, 90]], dtype=int16), array([[287, 70],
[328, 70],
[328, 90],
[287, 90]], dtype=int16), array([[454, 69],
[496, 69],
[496, 90],
[454, 90]], dtype=int16), array([[ 17, 101],
[ 95, 101],
[ 95, 124],
[ 17, 124]], dtype=int16), array([[144, 101],
[178, 101],
[178, 122],
[144, 122]], dtype=int16), array([[278, 101],
[338, 101],
[338, 124],
[278, 124]], dtype=int16), array([[448, 101],
[503, 101],
[503, 121],
[448, 121]], dtype=int16)], 'rec_boxes': array([[234, 6, 316, 25],
[ 38, 39, 73, 57],
[122, 32, 201, 58],
[227, 34, 346, 57],
[351, 34, 391, 58],
[417, 35, 534, 58],
[ 34, 70, 78, 90],
[287, 70, 328, 90],
[454, 69, 496, 90],
[ 17, 101, 95, 124],
[144, 101, 178, 122],
[278, 101, 338, 124],
[448, 101, 503, 121]], dtype=int16)}, 'table_res_list': [{'cell_box_list': [array([3.18822289e+00, 1.46489874e-01, 5.46996138e+02, 3.08782365e+01]), array([ 3.21032453, 31.1510637 , 110.20750237, 65.14108063]), array([110.18174553, 31.13076188, 213.00813103, 65.02860047]), array([212.96108818, 31.09959008, 404.19618034, 64.99535157]), array([404.08112907, 31.18304802, 547.00864983, 65.0847223 ]), array([ 3.21772957, 65.0738733 , 110.33685875, 96.07921387]), array([110.23703575, 65.02486207, 213.08839226, 96.01378419]), array([213.06095695, 64.96230103, 404.28425407, 95.97141816]), array([404.23704338, 65.04879548, 547.01273918, 96.03654267]), array([ 3.22793937, 96.08334137, 110.38572502, 127.08698823]), array([110.40586662, 96.10539795, 213.19943047, 127.07002045]), array([213.12627983, 96.0539148 , 404.42686272, 127.02842499]), array([404.33042717, 96.07251526, 547.01273918, 126.45088746])], 'pred_html': '<html><body><table><tr><td colspan="4">CRuncover</td></tr><tr><td>Dres</td><td>连续工作3</td><td>取出来放在网上 没想</td><td>江、整江等八大</td></tr><tr><td>Abstr</td><td></td><td>rSrivi</td><td>$709.</td></tr><tr><td>cludingGiv</td><td>2.72</td><td>Ingcubic</td><td>$744.78</td></tr></table></body></html>', 'table_ocr_pred': {'rec_polys': [array([[234, 6],
[316, 6],
[316, 25],
[234, 25]], dtype=int16), array([[38, 39],
[73, 39],
[73, 57],
[38, 57]], dtype=int16), array([[122, 32],
[201, 32],
[201, 58],
[122, 58]], dtype=int16), array([[227, 34],
[346, 34],
[346, 57],
[227, 57]], dtype=int16), array([[351, 34],
[391, 34],
[391, 58],
[351, 58]], dtype=int16), array([[417, 35],
[534, 35],
[534, 58],
[417, 58]], dtype=int16), array([[34, 70],
[78, 70],
[78, 90],
[34, 90]], dtype=int16), array([[287, 70],
[328, 70],
[328, 90],
[287, 90]], dtype=int16), array([[454, 69],
[496, 69],
[496, 90],
[454, 90]], dtype=int16), array([[ 17, 101],
[ 95, 101],
[ 95, 124],
[ 17, 124]], dtype=int16), array([[144, 101],
[178, 101],
[178, 122],
[144, 122]], dtype=int16), array([[278, 101],
[338, 101],
[338, 124],
[278, 124]], dtype=int16), array([[448, 101],
[503, 101],
[503, 121],
[448, 121]], dtype=int16)], 'rec_texts': ['CRuncover', 'Dres', '连续工作3', '取出来放在网上', '没想', '江、整江等八大', 'Abstr', 'rSrivi', '$709.', 'cludingGiv', '2.72', 'Ingcubic', '$744.78'], 'rec_scores': [0.9951260685920715, 0.9943379759788513, 0.9968608021736145, 0.9978817105293274, 0.9985721111297607, 0.9616036415100098, 0.9977153539657593, 0.987593948841095, 0.9906861186027527, 0.9959743618965149, 0.9970152378082275, 0.9977849721908569, 0.9984450936317444], 'rec_boxes': [array([234, 6, 316, 25], dtype=int16), array([38, 39, 73, 57], dtype=int16), array([122, 32, 201, 58], dtype=int16), array([227, 34, 346, 57], dtype=int16), array([351, 34, 391, 58], dtype=int16), array([417, 35, 534, 58], dtype=int16), array([34, 70, 78, 90], dtype=int16), array([287, 70, 328, 90], dtype=int16), array([454, 69, 496, 90], dtype=int16), array([ 17, 101, 95, 124], dtype=int16), array([144, 101, 178, 122], dtype=int16), array([278, 101, 338, 124], dtype=int16), array([448, 101, 503, 121], dtype=int16)]}}]}}
```
For an explanation of the result parameters, refer to the result interpretation in [2.2.2 Python Script Integration](#222-python-script-integration).
The visualization results are saved under `save_path`, including the visualized image of the table recognition result.
#### 2.2.2 Python Script Integration
* The command line method above is for a quick look at the results. In a project, you generally need to integrate through code; you can complete the pipeline's fast inference with just a few lines, as follows:
```python
from paddlex import create_pipeline
pipeline = create_pipeline(pipeline="table_recognition_v2")
output = pipeline.predict(
input="table_recognition.jpg",
use_doc_orientation_classify=False,
use_doc_unwarping=False,
)
for res in output:
res.print()
res.save_to_img("./output/")
res.save_to_xlsx("./output/")
res.save_to_html("./output/")
res.save_to_json("./output/")
```
In the above Python script, the following steps are executed:
(1) The `create_pipeline()` function is used to instantiate a General Table Recognition Pipeline v2 object. The specific parameter descriptions are as follows:
| Parameter | Description | Type | Default Value |
|---|---|---|---|
| `pipeline` | The pipeline name or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. | `str` | `None` |
| `config` | Specific configuration information for the pipeline (if set together with `pipeline`, it takes priority, and the pipeline name inside it must match `pipeline`). | `dict[str, Any]` | `None` |
| `device` | The device used for pipeline inference. Supports a specific GPU card such as `gpu:0`, a specific card of other hardware such as `npu:0`, or `cpu`. | `str` | `gpu:0` |
| `use_hpip` | Whether to enable high-performance inference; available only for pipelines that support it. | `bool` | `False` |
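As a minimal sketch of these parameters (the device string is only an example; swap in `"cpu"`, `"npu:0"`, etc. as needed), the pipeline can be instantiated on a specific device like this:

```python
from paddlex import create_pipeline

# A minimal sketch: instantiate the pipeline on the first GPU.
pipeline = create_pipeline(pipeline="table_recognition_v2", device="gpu:0")
```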
(2) Call the `predict()` method of the general table recognition pipeline v2 object for inference prediction. This method will return a `generator`. The parameters of the `predict()` method and their descriptions are as follows:
| Parameter | Description | Type | Options | Default Value |
|---|---|---|---|---|
| `input` | Data to be predicted; supports multiple input types (required) | `Python Var\|str\|list` | **Python Var**: e.g. `numpy.ndarray` image data<br>**str**: the local path of an image or PDF file, e.g. `/root/data/img.jpg`; a network URL of an image or PDF file; or a local directory containing the images to predict, e.g. `/root/data/` (prediction of PDF files inside a directory is currently not supported; a PDF must be given as a specific file path)<br>**list**: elements of the above types, e.g. `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]` | `None` |
| `device` | Pipeline inference device | `str\|None` | **CPU**: e.g. `cpu` to use the CPU for inference<br>**GPU**: e.g. `gpu:0` to use the 1st GPU<br>**NPU**: e.g. `npu:0` to use the 1st NPU<br>**XPU**: e.g. `xpu:0` to use the 1st XPU<br>**MLU**: e.g. `mlu:0` to use the 1st MLU<br>**DCU**: e.g. `dcu:0` to use the 1st DCU<br>**None**: uses the value set at pipeline initialization, which prefers the local GPU 0 and falls back to the CPU | `None` |
| `use_doc_orientation_classify` | Whether to use the document orientation classification module | `bool\|None` | **bool**: `True` or `False`<br>**None**: uses the pipeline initialization value (`True`) | `None` |
| `use_doc_unwarping` | Whether to use the document unwarping module | `bool\|None` | **bool**: `True` or `False`<br>**None**: uses the pipeline initialization value (`True`) | `None` |
| `text_det_limit_side_len` | Image side length limit for text detection | `int\|None` | **int**: any integer greater than `0`<br>**None**: uses the pipeline initialization value (`960`) | `None` |
| `text_det_limit_type` | Type of the image side length limit for text detection | `str\|None` | **str**: `min` or `max`; `min` ensures the shortest image side is no less than `limit_side_len`, `max` ensures the longest side is no greater than `limit_side_len`<br>**None**: uses the pipeline initialization value (`max`) | `None` |
| `text_det_thresh` | Detection pixel threshold; only pixels whose scores in the output probability map exceed this threshold are treated as text pixels | `float\|None` | **float**: any floating-point number greater than `0`<br>**None**: uses the pipeline initialization value (`0.3`) | `None` |
| `text_det_box_thresh` | Detection box threshold; a detected region is kept as a text region if the average score of all pixels inside the box exceeds this threshold | `float\|None` | **float**: any floating-point number greater than `0`<br>**None**: uses the pipeline initialization value (`0.6`) | `None` |
| `text_det_unclip_ratio` | Text detection expansion ratio; the larger the value, the larger the expanded text region | `float\|None` | **float**: any floating-point number greater than `0`<br>**None**: uses the pipeline initialization value (`2.0`) | `None` |
| `text_rec_score_thresh` | Text recognition threshold; only text results with scores above this threshold are retained | `float\|None` | **float**: any floating-point number greater than `0`<br>**None**: uses the pipeline initialization value (`0.0`, i.e. no threshold) | `None` |
| `use_layout_detection` | Whether to use the layout detection module | `bool\|None` | **bool**: `True` or `False`<br>**None**: uses the pipeline initialization value (`True`) | `None` |
| `layout_threshold` | Confidence threshold for layout detection; only boxes scoring above this threshold are output | `float\|dict\|None` | **float**: any floating-point number greater than `0`<br>**dict**: keys are integer category IDs, values are floating-point numbers greater than `0`<br>**None**: uses the pipeline initialization value (`0.5`) | `None` |
| `layout_nms` | Whether to apply NMS post-processing in layout detection | `bool\|None` | **bool**: `True` or `False`<br>**None**: uses the pipeline initialization value (`True`) | `None` |
| `layout_unclip_ratio` | Scale factor for the side lengths of detection boxes; if not specified, the default PaddleX official model configuration is used | `float\|list\|None` | **float**: a floating-point number greater than `0`, e.g. `1.1`, scaling both width and height by 1.1 with the box center unchanged<br>**list**: e.g. `[1.2, 1.5]`, scaling the width by 1.2 and the height by 1.5 with the box center unchanged<br>**None**: uses the pipeline initialization value (`1.0`) | `None` |
| `layout_merge_bboxes_mode` | Merging mode for detection boxes output by the model; if not specified, the default PaddleX official model configuration is used | `string\|None` | **large**: among overlapping boxes, keep only the outermost box and remove the inner ones<br>**small**: among overlapping boxes, keep only the innermost box and remove the outer ones<br>**union**: keep all boxes without filtering<br>**None**: uses the pipeline initialization value (`large`) | `None` |
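As an illustrative sketch (every keyword below is documented in the table above; the values shown are the documented defaults, except `text_rec_score_thresh`, which is raised here purely as an example), a `predict()` call with per-call overrides looks like this:

```python
# A sketch of predict() with several documented overrides.
output = pipeline.predict(
    input="table_recognition.jpg",
    use_layout_detection=True,       # enable the layout detection module
    text_det_limit_side_len=960,     # limit the longest image side to 960 px...
    text_det_limit_type="max",       # ...since the limit type is "max"
    text_rec_score_thresh=0.5,       # keep only texts scoring above 0.5 (example value)
    layout_threshold=0.5,            # confidence threshold for layout boxes
)
for res in output:
    res.print()
```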
(3) Process the prediction results. The prediction result for each sample is of type `dict`, and supports operations such as printing, saving as an image, saving as an `xlsx` file, saving as an `HTML` file, and saving as a `json` file:
| Method | Description | Parameter | Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the result to the terminal | `format_json` | `bool` | Whether to format the output content with JSON indentation | `True` |
| | | `indent` | `int` | Indentation level used to beautify the JSON output, making it more readable; effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode; `True` escapes all non-ASCII characters, `False` keeps the original characters; effective only when `format_json` is `True` | `False` |
| `save_to_json()` | Save the result as a JSON file | `save_path` | `str` | The save path; when it is a directory, the saved file name matches the input file name | N/A |
| | | `indent` | `int` | Indentation level used to beautify the JSON output, making it more readable; effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode; `True` escapes all non-ASCII characters, `False` keeps the original characters; effective only when `format_json` is `True` | `False` |
| `save_to_img()` | Save the result as an image file | `save_path` | `str` | The save path; supports directory or file paths | N/A |
| `save_to_xlsx()` | Save the result as an XLSX file | `save_path` | `str` | The save path; supports directory or file paths | N/A |
| `save_to_html()` | Save the result as an HTML file | `save_path` | `str` | The save path; supports directory or file paths | N/A |
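For example, the formatting parameters above can be combined as follows (a sketch; `./output/` is an arbitrary directory, and when a directory is passed the output file names are derived from the input file name):

```python
# Print nicely formatted JSON and save the structured results.
for res in output:
    res.print(format_json=True, indent=4, ensure_ascii=False)
    res.save_to_json(save_path="./output/", indent=4, ensure_ascii=False)
    res.save_to_html(save_path="./output/")   # HTML form of the table
    res.save_to_xlsx(save_path="./output/")   # Excel form of the table
```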
- Calling the `print()` method will print the result to the terminal, with the printed content explained as follows:
- `input_path`: `(str)` The input path of the image to be predicted
- `model_settings`: `(Dict[str, bool])` Configuration parameters required for the pipeline model
- `use_doc_preprocessor`: `(bool)` Controls whether to enable the document preprocessing sub-pipeline
- `use_layout_detection`: `(bool)` Controls whether to enable the layout detection sub-pipeline
- `use_ocr_model`: `(bool)` Controls whether to enable the OCR sub-pipeline
- `layout_det_res`: `(Dict[str, Union[List[numpy.ndarray], List[float]]])` Output result of the layout detection sub-module. Only exists when `use_layout_detection=True`
- `input_path`: `(Union[str, None])` The image path accepted by the layout detection module, saved as `None` when the input is a `numpy.ndarray`
- `page_index`: `(Union[int, None])` If the input is a PDF file, it indicates the current page number of the PDF, otherwise it is `None`
- `boxes`: `(List[Dict])` List of layout detection boxes, where each element contains the following fields
- `cls_id`: `(int)` The class ID of the detection box
- `score`: `(float)` The confidence score of the detection box
- `coordinate`: `(List[float])` The detection box coordinates [x1, y1, x2, y2], i.e. the x and y coordinates of the top-left corner followed by the x and y coordinates of the bottom-right corner
- `doc_preprocessor_res`: `(Dict[str, Union[str, Dict[str, bool], int]])` The output result of the document preprocessing sub-pipeline. Exists only when `use_doc_preprocessor=True`
- `input_path`: `(Union[str, None])` The image path accepted by the image preprocessing sub-pipeline, saved as `None` when the input is `numpy.ndarray`
- `model_settings`: `(Dict)` Model configuration parameters for the preprocessing sub-pipeline
- `use_doc_orientation_classify`: `(bool)` Controls whether to enable document orientation classification
- `use_doc_unwarping`: `(bool)` Controls whether to enable document unwarping
- `angle`: `(int)` The prediction result of document orientation classification. When enabled, the values are [0,1,2,3], corresponding to [0°,90°,180°,270°] respectively; when not enabled, it is -1
- `dt_polys`: `(List[numpy.ndarray])` List of polygons for text detection. Each detection box is represented by a numpy array of 4 vertex coordinates, with the array shape being (4, 2) and the data type being int16
- `dt_scores`: `(List[float])` List of confidence scores for text detection boxes
- `text_det_params`: `(Dict[str, Dict[str, int, float]])` Configuration parameters for the text detection module
- `limit_side_len`: `(int)` The side length limit value during image preprocessing
- `limit_type`: `(str)` The processing method for the side length limit
- `thresh`: `(float)` Confidence threshold for text pixel classification
- `box_thresh`: `(float)` Confidence threshold for text detection boxes
- `unclip_ratio`: `(float)` Expansion coefficient for text detection boxes
- `text_type`: `(str)` Type of text detection, currently fixed as "general"
- `text_rec_score_thresh`: `(float)` Filtering threshold for text recognition results
- `rec_texts`: `(List[str])` List of text recognition results, only includes texts with confidence scores exceeding `text_rec_score_thresh`
- `rec_scores`: `(List[float])` List of confidence scores for text recognition, filtered by `text_rec_score_thresh`
- `rec_polys`: `(List[numpy.ndarray])` List of text detection boxes after confidence filtering, same format as `dt_polys`
- `rec_boxes`: `(numpy.ndarray)` Array of rectangular bounding boxes for detection boxes, with shape (n, 4) and dtype int16. Each row represents the [x_min, y_min, x_max, y_max] coordinates of a rectangle, where (x_min, y_min) is the top-left coordinate and (x_max, y_max) is the bottom-right coordinate
- Calling the `save_to_json()` method will save the above content to the specified `save_path`. If specified as a directory, the saved path will be `save_path/{your_img_basename}.json`; if specified as a file, it will be saved directly to that file. Since JSON files do not support saving numpy arrays, the `numpy.array` types will be converted to lists.
- Calling the `save_to_img()` method will save the visualization results to the specified `save_path`. If a directory is specified, the saved path will be `save_path/{your_img_basename}_ocr_res_img.{your_img_extension}`; if a file is specified, the results will be saved directly to that file. (Since the pipeline usually produces several result images, specifying a specific file path is not recommended; otherwise the images will overwrite one another and only the last one will remain.)
- Calling the `save_to_html()` method will save the above content to the specified `save_path`. If specified as a directory, the saved path will be `save_path/{your_img_basename}.html`; if specified as a file, it will be saved directly to that file. In the general table recognition pipeline v2, the HTML form of the table in the image will be written to the specified HTML file.
- Calling the `save_to_xlsx()` method will save the above content to the specified `save_path`. If specified as a directory, the saved path will be `save_path/{your_img_basename}.xlsx`; if specified as a file, it will be saved directly to that file. In the general table recognition pipeline v2, the Excel form of the table in the image will be written to the specified XLSX file.
* Additionally, visualized images and prediction results can also be obtained through the following attributes:
| Attribute | Description |
|---|---|
| `json` | Get the prediction result in `json` format |
| `img` | Get the visualized images in `dict` format |
- The prediction result obtained by the `json` attribute is a dict type of data, with content consistent with the content saved by calling the `save_to_json()` method.
- The prediction result returned by the `img` attribute is a dictionary type of data. The keys are `table_res_img`, `ocr_res_img`, `layout_res_img`, and `preprocessed_img`, and the corresponding values are four `Image.Image` objects, in order: visualized image of table recognition result, visualized image of OCR result, visualized image of layout region detection result, and visualized image of image preprocessing. If a sub-module is not used, the corresponding result image is not included in the dictionary.
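A short sketch of reading these attributes (the key `table_res_img` follows the list above; saving it to a file is just an example use):

```python
# Access the results via attributes instead of the save_* methods.
for res in output:
    data = res.json               # dict, same content as save_to_json()
    vis = res.img                 # dict of PIL Image.Image objects
    if "table_res_img" in vis:    # present when table recognition ran
        vis["table_res_img"].save("./output/table_vis.jpg")
```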
In addition, you can obtain the general table recognition pipeline v2 configuration file and load the configuration file for prediction. You can execute the following command to save the result in `my_path`:
```bash
paddlex --get_pipeline_config table_recognition_v2 --save_path ./my_path
```
If you have obtained the configuration file, you can customize the settings for the General Table Recognition Pipeline v2. Simply modify the `pipeline` parameter value in the `create_pipeline` method to the path of the pipeline configuration file. The example is as follows:
```python
from paddlex import create_pipeline
pipeline = create_pipeline(pipeline="./my_path/table_recognition_v2.yaml")
output = pipeline.predict(
input="table_recognition.jpg",
use_doc_orientation_classify=False,
use_doc_unwarping=False,
)
for res in output:
res.print()
res.save_to_img("./output/")
res.save_to_xlsx("./output/")
res.save_to_html("./output/")
res.save_to_json("./output/")
```
Note: The parameters in the configuration file are the initialization parameters for the pipeline. If you want to change the initialization parameters of the General Table Recognition Pipeline v2, you can directly modify the parameters in the configuration file and load the configuration file for prediction. Additionally, CLI prediction also supports passing in the configuration file by specifying the path with `--pipeline`.
## 3. Development Integration/Deployment
If the pipeline meets your requirements for inference speed and accuracy, you can proceed with development integration/deployment.
If you need to directly apply the pipeline in your Python project, you can refer to the example code in [2.2.2 Python Script Integration](#222-python-script-integration).
In addition, PaddleX also provides three other deployment methods, detailed as follows:
🚀 High-Performance Inference: In actual production environments, many applications have stringent performance requirements (especially response speed) for deployment strategies to ensure efficient system operation and smooth user experience. Therefore, PaddleX provides a high-performance inference plugin designed to deeply optimize the performance of model inference and pre/post-processing, significantly speeding up the end-to-end process. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.en.md).
☁️ Service Deployment: Service deployment is a common form of deployment in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. PaddleX supports multiple pipeline service deployment solutions. For detailed pipeline service deployment procedures, please refer to the [PaddleX Service Deployment Guide](../../../pipeline_deploy/serving.en.md).
Below are the API references and multi-language service call examples for basic service deployment:
**API Reference**
For the main operations provided by the service:
- The HTTP request method is POST.
- Both the request body and response body are JSON data (JSON objects).
- When the request is processed successfully, the response status code is `200`, and the properties of the response body are as follows:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Fixed as `0`. |
| `errorMsg` | `string` | Error message. Fixed as `"Success"`. |
| `result` | `object` | The result of the operation. |
- When the request is not processed successfully, the properties of the response body are as follows:
| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Same as the response status code. |
| `errorMsg` | `string` | Error message. |
The main operations provided by the service are as follows:
Locate and recognize tables in the image.
`POST /table-recognition`
- The properties of the request body are as follows:
| Name | Type | Meaning | Required |
|---|---|---|---|
| `file` | `string` | The URL of an image or PDF file accessible to the server, or the Base64-encoded content of such a file. For PDF files with more than 10 pages, only the first 10 pages are used. | Yes |
| `fileType` | `integer` \| `null` | The file type: `0` for a PDF file, `1` for an image file. If omitted, the type is inferred from the URL. | No |
| `useDocOrientationClassify` | `boolean` \| `null` | See the `use_doc_orientation_classify` parameter of the pipeline's `predict` method. | No |
| `useDocUnwarping` | `boolean` \| `null` | See the `use_doc_unwarping` parameter of the pipeline's `predict` method. | No |
| `useLayoutDetection` | `boolean` \| `null` | See the `use_layout_detection` parameter of the pipeline's `predict` method. | No |
| `useOcrModel` | `boolean` \| `null` | See the `use_ocr_model` parameter of the pipeline's `predict` method. | No |
| `layoutThreshold` | `number` \| `null` | See the `layout_threshold` parameter of the pipeline's `predict` method. | No |
| `layoutNms` | `boolean` \| `null` | See the `layout_nms` parameter of the pipeline's `predict` method. | No |
| `layoutUnclipRatio` | `number` \| `array` \| `null` | See the `layout_unclip_ratio` parameter of the pipeline's `predict` method. | No |
| `layoutMergeBboxesMode` | `string` \| `null` | See the `layout_merge_bboxes_mode` parameter of the pipeline's `predict` method. | No |
| `textDetLimitSideLen` | `integer` \| `null` | See the `text_det_limit_side_len` parameter of the pipeline's `predict` method. | No |
| `textDetLimitType` | `string` \| `null` | See the `text_det_limit_type` parameter of the pipeline's `predict` method. | No |
| `textDetThresh` | `number` \| `null` | See the `text_det_thresh` parameter of the pipeline's `predict` method. | No |
| `textDetBoxThresh` | `number` \| `null` | See the `text_det_box_thresh` parameter of the pipeline's `predict` method. | No |
| `textDetUnclipRatio` | `number` \| `null` | See the `text_det_unclip_ratio` parameter of the pipeline's `predict` method. | No |
| `textRecScoreThresh` | `number` \| `null` | See the `text_rec_score_thresh` parameter of the pipeline's `predict` method. | No |
Each element in the `tableRecResults` array of the operation result is an object with the following properties:
| Name | Type | Description |
|---|---|---|
| `prunedResult` | `object` | A simplified version of the `res` field in the JSON representation generated by the `predict` method of the pipeline object, with the `input_path` field removed. |
| `outputImages` | `object` \| `null` | See the description of the `img` attribute of the pipeline prediction result. The images are in JPEG format and Base64-encoded. |
| `inputImage` | `string` \| `null` | The input image, in JPEG format and Base64-encoded. |
**Multi-language Service Call Example**

**Python**

```python
import base64
import requests

API_URL = "http://localhost:8080/table-recognition"
file_path = "./demo.jpg"

# Encode the local image file as Base64
with open(file_path, "rb") as file:
    file_bytes = file.read()
    file_data = base64.b64encode(file_bytes).decode("ascii")

payload = {"file": file_data, "fileType": 1}

response = requests.post(API_URL, json=payload)

assert response.status_code == 200
result = response.json()["result"]
for i, res in enumerate(result["tableRecResults"]):
    print(res["prunedResult"])
    # Save each Base64-encoded visualization image returned in outputImages
    for img_name, img in (res["outputImages"] or {}).items():
        img_path = f"{img_name}_{i}.jpg"
        with open(img_path, "wb") as f:
            f.write(base64.b64decode(img))
        print(f"Output image saved at {img_path}")
```
📱 Edge Deployment: Edge deployment is a method of placing computing and data processing capabilities directly on user devices, allowing them to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.en.md).
You can choose the appropriate deployment method based on your needs to integrate the model pipeline into subsequent AI applications.
## 4. Custom Development
If the default model weights provided by the General Table Recognition pipeline v2 do not meet your requirements in terms of accuracy or speed, you can try to further fine-tune the existing models using your own domain-specific or application data to improve the recognition performance of the General Table Recognition pipeline v2 in your specific scenario.
### 4.1 Model Fine-Tuning
Since the General Table Recognition pipeline v2 consists of several modules, if the overall performance is not satisfactory, the issue may lie in any one of these modules. You can analyze the images with poor recognition results to identify which module is problematic and refer to the corresponding fine-tuning tutorial links in the table below.
| Scenario | Fine-Tuning Module | Fine-Tuning Reference Link |
|---|---|---|
| Table classification errors | Table Classification Module | Link |
| Table cell localization errors | Table Cell Detection Module | Link |
| Table structure recognition errors | Table Structure Recognition Module | Link |
| Failure to detect table regions | Layout Region Detection Module | Link |
| Missing text detection | Text Detection Module | Link |
| Inaccurate text content | Text Recognition Module | Link |
| Inaccurate whole-image rotation correction | Document Image Orientation Classification Module | Link |
| Inaccurate image distortion correction | Text Image Correction Module | Fine-tuning not supported |
### 4.2 Model Application
After fine-tuning with your private dataset, you can obtain the local model weight file.
To use the fine-tuned model weights, simply modify the pipeline configuration file by replacing the local path of the fine-tuned model weights in the corresponding position in the configuration file:
```yaml
SubModules:
  LayoutDetection:
    module_name: layout_detection
    model_name: PicoDet_layout_1x_table
    model_dir: null # Replace with the path to the fine-tuned layout region detection model weights
  TableClassification:
    module_name: table_classification
    model_name: PP-LCNet_x1_0_table_cls
    model_dir: null # Replace with the path to the fine-tuned table classification model weights
  WiredTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANeXt_wired
    model_dir: null # Replace with the path to the fine-tuned wired table structure recognition model weights
  WirelessTableStructureRecognition:
    module_name: table_structure_recognition
    model_name: SLANeXt_wireless
    model_dir: null # Replace with the path to the fine-tuned wireless table structure recognition model weights
  WiredTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wired_table_cell_det
    model_dir: null # Replace with the path to the fine-tuned wired table cell detection model weights
  WirelessTableCellsDetection:
    module_name: table_cells_detection
    model_name: RT-DETR-L_wireless_table_cell_det
    model_dir: null # Replace with the path to the fine-tuned wireless table cell detection model weights
SubPipelines:
  DocPreprocessor:
    pipeline_name: doc_preprocessor
    use_doc_orientation_classify: True
    use_doc_unwarping: True
    SubModules:
      DocOrientationClassify:
        module_name: doc_text_orientation
        model_name: PP-LCNet_x1_0_doc_ori
        model_dir: null # Replace with the path to the fine-tuned document image orientation classification model weights
      DocUnwarping:
        module_name: image_unwarping
        model_name: UVDoc
        model_dir: null
  GeneralOCR:
    pipeline_name: OCR
    text_type: general
    use_doc_preprocessor: False
    use_textline_orientation: False
    SubModules:
      TextDetection:
        module_name: text_detection
        model_name: PP-OCRv4_server_det
        model_dir: null # Replace with the path to the fine-tuned text detection model weights
        limit_side_len: 960
        limit_type: max
        thresh: 0.3
        box_thresh: 0.6
        unclip_ratio: 2.0
      TextRecognition:
        module_name: text_recognition
        model_name: PP-OCRv4_server_rec
        model_dir: null # Replace with the path to the fine-tuned text recognition model weights
        batch_size: 1
        score_thresh: 0
```
Subsequently, refer to the command line method or Python script method in [2.2 Local Experience](#22-local-experience) to load the modified pipeline configuration file.
## 5. Multi-Hardware Support
PaddleX supports various mainstream hardware devices such as NVIDIA GPU, Kunlun Chip XPU, Ascend NPU, and Cambricon MLU. Simply modify the `--device` parameter to achieve seamless switching between different hardware.
For example, to run the General Table Recognition Pipeline v2 inference on an Ascend NPU, the command is:
```bash
paddlex --pipeline table_recognition_v2 \
--input table_recognition.jpg \
--save_path ./output \
--device npu:0
```
If you want to use the General Table Recognition pipeline v2 on a wider variety of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).