---
comments: true
---

# Formula Recognition Pipeline User Guide

## 1. Introduction to the Formula Recognition Pipeline

Formula recognition is a technology that automatically identifies and extracts LaTeX formula content and structure from documents or images. It is widely used in fields such as mathematics, physics, and computer science for document editing and data analysis. Using computer vision and machine learning algorithms, formula recognition converts complex mathematical formula information into an editable LaTeX format, facilitating further processing and analysis of the data.

The formula recognition pipeline is designed to solve formula recognition tasks by extracting formula information from images and outputting it as LaTeX source code. This pipeline integrates PP-FormulaNet, an advanced formula recognition model developed by the PaddlePaddle Vision Team, and the well-known formula recognition model UniMERNet. It is an end-to-end formula recognition system that supports recognizing simple printed formulas, complex printed formulas, and handwritten formulas, and it additionally provides image orientation correction and distortion correction. Based on this pipeline, precise formula content prediction can be achieved, covering application scenarios in education, research, finance, manufacturing, and other fields. The pipeline also provides flexible deployment options, supporting multiple hardware devices and programming languages. Moreover, it supports secondary development: you can train and optimize the pipeline on your own dataset, and the trained model can be seamlessly integrated.

The formula recognition pipeline includes a mandatory formula recognition module, as well as optional layout detection, document image orientation classification, and text image unwarping modules. The document image orientation classification module and the text image unwarping module are integrated into the pipeline as a document preprocessing sub-pipeline. Each module contains multiple models, and you can choose a model based on the benchmark data below. If you prioritize accuracy, choose a model with higher precision; if inference speed matters more, choose a faster model; if storage size is a concern, choose a smaller model.

**Document Image Orientation Classification Module** (Optional):

| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-LCNet_x1_0_doc_ori | Inference Model / Training Model | 99.06 | 3.84845 | 9.23735 | 7 | A document image classification model based on PP-LCNet_x1_0, with four categories: 0 degrees, 90 degrees, 180 degrees, and 270 degrees. |
Note: The evaluation dataset for the above accuracy metrics is a self-built dataset covering multiple scenarios such as certificates and documents, with 1,000 images. The GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. The CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

**Text Image Correction Module** (Optional):

| Model | Model Download Link | CER | Model Storage Size (M) | Introduction |
|---|---|---|---|---|
| UVDoc | Inference Model / Training Model | 0.179 | 30.3 | High-precision text image correction model |
Note: The accuracy metrics of the model are measured on the DocUNet benchmark.

**Layout Detection Module** (Optional):

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-DocLayout-L | Inference Model / Training Model | 90.4 | 34.5252 | 1454.27 | 123.76 | A high-precision layout area localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, using RT-DETR-L. |
| PP-DocLayout-M | Inference Model / Training Model | 75.2 | 15.9 | 160.1 | 22.578 | A layout area localization model with balanced precision and efficiency, trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, using PicoDet-L. |
| PP-DocLayout-S | Inference Model / Training Model | 70.9 | 13.8 | 46.7 | 4.834 | A high-efficiency layout area localization model trained on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports, using PicoDet-S. |
Note: The evaluation dataset for the above precision metrics is a self-built layout area detection dataset by PaddleOCR, containing 500 common document-type images of Chinese and English papers, magazines, contracts, books, exams, and research reports. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

> ❗ The list above includes the 3 core models most strongly supported by the layout detection module. The module supports a total of 11 models, including several predefined models covering different category sets. The complete model list is as follows:
**👉 Details of Model List**
* Table Layout Detection Model

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x_table | Inference Model / Training Model | 97.5 | 12.623 | 90.8934 | 7.4 | A high-efficiency layout area localization model trained on a self-built dataset using PicoDet-1x, capable of detecting table regions. |
Note: The evaluation dataset for the above precision metrics is a self-built layout table area detection dataset by PaddleOCR, containing 7,835 Chinese and English document images with tables. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

* 3-Class Layout Detection Model, including Table, Image, and Stamp
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model / Training Model | 88.2 | 13.5 | 45.8 | 4.8 | A high-efficiency layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, using PicoDet-S. |
| PicoDet-L_layout_3cls | Inference Model / Training Model | 89.0 | 15.7 | 159.8 | 22.6 | A layout area localization model with balanced precision and efficiency, trained on a self-built dataset of Chinese and English papers, magazines, and research reports, using PicoDet-L. |
| RT-DETR-H_layout_3cls | Inference Model / Training Model | 95.8 | 114.6 | 3832.6 | 470.1 | A high-precision layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, using RT-DETR-H. |
Note: The evaluation dataset for the above precision metrics is a self-built layout area detection dataset by PaddleOCR, containing 1,154 common document images of Chinese and English papers, magazines, and research reports. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

* 5-Class English Document Area Detection Model, including Text, Title, Table, Image, and List
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x | Inference Model / Training Model | 97.8 | 13.0 | 91.3 | 7.4 | A high-efficiency English document layout area localization model trained on the PubLayNet dataset, using PicoDet-1x. |
Note: The evaluation dataset for the above precision metrics is the [PubLayNet](https://developer.ibm.com/exchanges/data/all/publaynet/) dataset, containing 11,245 English document images. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

* 17-Class Area Detection Model, including 17 common layout categories: Paragraph Title, Image, Text, Number, Abstract, Content, Figure Caption, Formula, Table, Table Caption, References, Document Title, Footnote, Header, Algorithm, Footer, and Stamp
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model / Training Model | 87.4 | 13.6 | 46.2 | 4.8 | A high-efficiency layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, using PicoDet-S. |
| PicoDet-L_layout_17cls | Inference Model / Training Model | 89.0 | 17.2 | 160.2 | 22.6 | A layout area localization model with balanced precision and efficiency, trained on a self-built dataset of Chinese and English papers, magazines, and research reports, using PicoDet-L. |
| RT-DETR-H_layout_17cls | Inference Model / Training Model | 98.3 | 115.1 | 3827.2 | 470.2 | A high-precision layout area localization model trained on a self-built dataset of Chinese and English papers, magazines, and research reports, using RT-DETR-H. |
Note: The evaluation dataset for the above precision metrics is a self-built layout area detection dataset by PaddleOCR, containing 892 common document images of Chinese and English papers, magazines, and research reports. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

**Formula Recognition Module** (Required):

| Model | Model Download Link | Avg-BLEU (%) | GPU Inference Time (ms) | Model Storage Size | Introduction |
|---|---|---|---|---|---|
| UniMERNet | Inference Model / Training Model | 86.13 | 2266.96 | 1.4 G | UniMERNet is a formula recognition model developed by Shanghai AI Lab. It uses Donut Swin as the encoder and MBartDecoder as the decoder. Trained on a dataset of one million samples, including simple formulas, complex formulas, scanned formulas, and handwritten formulas, it significantly improves the recognition accuracy of real-world formulas. |
| PP-FormulaNet-S | Inference Model / Training Model | 87.12 | 202.25 | 167.9 M | PP-FormulaNet is an advanced formula recognition model developed by the Baidu PaddlePaddle Vision Team. The PP-FormulaNet-S version uses PP-HGNetV2-B4 as its backbone network. Through parallel masking and model distillation techniques, it significantly improves inference speed while maintaining high recognition accuracy, making it suitable for applications requiring fast inference. |
| PP-FormulaNet-L | Inference Model / Training Model | 92.13 | 1976.52 | 535.2 M | The PP-FormulaNet-L version uses Vary_VIT_B as its backbone network and is trained on a large-scale formula dataset, showing significant improvements in recognizing complex formulas compared to PP-FormulaNet-S. |
| LaTeX_OCR_rec | Inference Model / Training Model | 71.63 | - | 89.7 M | LaTeX-OCR is a formula recognition algorithm based on an autoregressive large model. It uses Hybrid ViT as the backbone network and a transformer as the decoder, significantly improving the accuracy of formula recognition. |
Note: The above accuracy metrics are measured on an internally built formula recognition test set within PaddleX. The BLEU score of LaTeX_OCR_rec on the LaTeX-OCR formula recognition test set is 0.8821. All GPU inference times are based on machines with Tesla V100 GPUs, with FP32 precision.

## 2. Quick Start

PaddleX supports experiencing the formula recognition pipeline locally via the command line or Python. Before using the formula recognition pipeline locally, please ensure that you have installed the PaddleX wheel package according to the [PaddleX Local Installation Tutorial](../../../installation/installation.en.md).

### 2.1 Command Line Experience

You can quickly experience the formula recognition pipeline with a single command. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/demo_image/pipelines/general_formula_recognition_001.png), and replace `--input` with your local path for prediction.

```bash
paddlex --pipeline formula_recognition \
    --input general_formula_recognition_001.png \
    --use_layout_detection True \
    --use_doc_orientation_classify False \
    --use_doc_unwarping False \
    --layout_threshold 0.5 \
    --layout_nms True \
    --layout_unclip_ratio 1.0 \
    --layout_merge_bboxes_mode large \
    --save_path ./output \
    --device gpu:0
```

The relevant parameter descriptions can be found in [2.2 Integration via Python Script](#22-integration-via-python-script). After running, the result is printed to the terminal, as shown below:

```bash
{'res': {'input_path': 'general_formula_recognition.png', 'page_index': None, 'model_settings': {'use_doc_preprocessor': False, 'use_layout_detection': True}, 'layout_det_res': {'input_path': None, 'boxes': [{'cls_id': 2, 'label': 'text', 'score': 0.9778407216072083, 'coordinate': [271.257, 648.50824, 1040.2291, 774.8482]}, ...]}, 'formula_res_list': [{'rec_formula': '\\small\\begin{aligned}{p(\\mathbf{x})=c(\\mathbf{u})\\prod_{i}p(x_{i}).}\\\\ \\end{aligned}', 'formula_region_id': 1, 'dt_polys': ([553.0718, 802.0996, 758.75635, 853.093],)}, ...]}}
```

The explanation of the result parameters can be found in the result interpretation in [2.2 Integration via Python Script](#22-integration-via-python-script). The visualization results are saved under `save_path`.

If you need to visualize the formula recognition results, run the following command to install the LaTeX rendering environment. Currently, visualization of the formula recognition pipeline only supports Ubuntu; other environments are not supported. For complex formulas, the LaTeX result may contain advanced representations that may not render successfully in environments such as Markdown:

```bash
sudo apt-get update
sudo apt-get install texlive texlive-latex-base texlive-latex-extra -y
```

Note: During visualization, each formula image must be rendered individually, so the process takes a long time. Please be patient.

### 2.2 Integration via Python Script

A few lines of code are enough to run pipeline inference.
Taking the formula recognition pipeline as an example:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="formula_recognition")
output = pipeline.predict(
    input="./general_formula_recognition_001.png",
    use_layout_detection=True,
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    layout_threshold=0.5,
    layout_nms=True,
    layout_unclip_ratio=1.0,
    layout_merge_bboxes_mode="large",
)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/")
```

In the above Python script, the following steps are executed:

(1) Instantiate the formula recognition pipeline object through `create_pipeline()`, with the parameters described below:
| Parameter | Description | Type | Default |
|---|---|---|---|
| `pipeline` | Pipeline name or path to the pipeline config file. If set to a pipeline name, it must be a pipeline supported by PaddleX. | `str` | `None` |
| `config` | Specific configuration information for the pipeline (if set simultaneously with `pipeline`, it takes precedence over `pipeline`, and the pipeline name must match `pipeline`). | `dict[str, Any]` | `None` |
| `device` | Pipeline inference device. Supports specifying a specific GPU card number, such as "gpu:0", a specific card number of other hardware, such as "npu:0", or the CPU, such as "cpu". | `str` | `None` |
| `use_hpip` | Whether to enable high-performance inference. Only available when the pipeline supports high-performance inference. | `bool` | `False` |
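For illustration, here is a minimal sketch that sets these initialization parameters explicitly (the CPU device string is only an example; see the `device` options in the next table):

```python
from paddlex import create_pipeline

# Instantiate the pipeline with explicit initialization parameters
# from the table above.
pipeline = create_pipeline(
    pipeline="formula_recognition",
    device="cpu",    # or e.g. "gpu:0" for the first GPU
    use_hpip=False,  # high-performance inference plugin disabled
)
```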
(2) Call the `predict()` method of the formula recognition pipeline object for inference. This method returns a `generator`. Below are the parameters of the `predict()` method and their descriptions:
| Parameter | Description | Type | Options | Default |
|---|---|---|---|---|
| `input` | Data to be predicted; supports multiple input types (required) | `Python Var\|str\|list` | • **Python Var**: image data represented by `numpy.ndarray`<br/>• **str**: the local path of an image or PDF file, e.g., `/root/data/img.jpg`; a URL of an image or PDF file; or a local directory containing the images to be predicted, e.g., `/root/data/` (prediction of PDF files inside directories is currently not supported; PDF files must be specified by a full file path)<br/>• **list**: elements must be of the above types, e.g., `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]` | `None` |
| `device` | Pipeline inference device | `str\|None` | • **CPU**: e.g., `cpu`;<br/>• **GPU**: e.g., `gpu:0` for the 1st GPU;<br/>• **NPU**: e.g., `npu:0` for the 1st NPU;<br/>• **XPU**: e.g., `xpu:0` for the 1st XPU;<br/>• **MLU**: e.g., `mlu:0` for the 1st MLU;<br/>• **DCU**: e.g., `dcu:0` for the 1st DCU;<br/>• **None**: the pipeline's initialized default is used; during initialization, the local GPU 0 is prioritized, falling back to the CPU if unavailable | `None` |
| `use_layout_detection` | Whether to use the document layout detection module | `bool\|None` | • **bool**: `True` or `False`;<br/>• **None**: the pipeline's initialized default (`True`) is used | `None` |
| `use_doc_orientation_classify` | Whether to use the document orientation classification module | `bool\|None` | • **bool**: `True` or `False`;<br/>• **None**: the pipeline's initialized default (`True`) is used | `None` |
| `use_doc_unwarping` | Whether to use the document unwarping module | `bool\|None` | • **bool**: `True` or `False`;<br/>• **None**: the pipeline's initialized default (`True`) is used | `None` |
| `layout_threshold` | Threshold for filtering out low-confidence predictions | `float\|dict\|None` | • **float**: e.g., `0.2`, filtering out all bounding boxes with confidence below 0.2;<br/>• **dict**: keys are `int` class IDs (`cls_id`) and values are `float` thresholds, e.g., `{0: 0.45, 2: 0.48, 7: 0.4}` applies a threshold of 0.45 to class 0, 0.48 to class 2, and 0.4 to class 7;<br/>• **None**: the default PaddleX official model configuration is used | `None` |
| `layout_nms` | Whether to apply NMS post-processing to filter overlapping bounding boxes | `bool\|None` | • **bool**: `True` or `False`;<br/>• **None**: the default PaddleX official model configuration is used | `None` |
| `layout_unclip_ratio` | Scaling factor for the side length of bounding boxes | `float\|list\|None` | • **float**: a positive number, e.g., `1.1`, keeping the box center fixed while scaling width and height by 1.1;<br/>• **list**: e.g., `[1.2, 1.5]`, keeping the center fixed while scaling the width by 1.2 and the height by 1.5;<br/>• **None**: the default PaddleX official model configuration is used | `None` |
| `layout_merge_bboxes_mode` | Merging mode for the bounding boxes output by the model | `str\|None` | • **large**: among overlapping boxes, only the largest outer box is kept and the inner boxes are removed;<br/>• **small**: among overlapping boxes, only the smallest inner boxes are kept and the outer boxes are removed;<br/>• **union**: no filtering; both inner and outer boxes are kept;<br/>• **None**: the default PaddleX official model configuration is used | `None` |
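To make the dictionary and list forms above concrete, here is a minimal sketch (the class IDs in the threshold dict are illustrative examples, not meaningful categories):

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="formula_recognition")

output = pipeline.predict(
    input="./general_formula_recognition_001.png",
    layout_threshold={0: 0.45, 7: 0.4},  # per-class confidence thresholds
    layout_unclip_ratio=[1.2, 1.5],      # width x1.2, height x1.5
    layout_merge_bboxes_mode="union",    # keep both inner and outer boxes
)
for res in output:
    res.print()
```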
(3) Process the prediction results. The prediction result of each sample is of `dict` type, and supports operations such as printing, saving as an image, and saving as a `json` file:
| Method | Description | Parameter | Type | Parameter Description | Default |
|---|---|---|---|---|---|
| `print()` | Print results to the terminal | `format_json` | `bool` | Whether to format the output content using `JSON` indentation | `True` |
| | | `indent` | `int` | Indentation level to beautify the `JSON` output, making it more readable; effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode; `True` escapes all non-ASCII characters, `False` keeps the original characters; effective only when `format_json` is `True` | `False` |
| `save_to_json()` | Save results as a JSON file | `save_path` | `str` | Path to save the file; if it is a directory, the saved file is named after the input file | `None` |
| | | `indent` | `int` | Indentation level to beautify the `JSON` output, making it more readable; effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode; effective only when `format_json` is `True` | `False` |
| `save_to_img()` | Save results as an image file | `save_path` | `str` | Path to save the file; supports a directory or a file path | `None` |
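A short sketch of calling these methods with their keyword arguments spelled out (a minimal example; `res` is one result yielded by `predict()` as in the script above):

```python
# 'res' is one prediction result yielded by pipeline.predict(...)
res.print(format_json=True, indent=2, ensure_ascii=False)  # 2-space JSON, raw non-ASCII
res.save_to_json(save_path="./output/")  # saved file is named after the input file
res.save_to_img(save_path="./output/")   # one visualization image per sub-result
```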
- Calling the `print()` method will print the results to the terminal. The printed content is explained as follows:

    - `input_path`: `(str)` The input path of the image to be predicted.
    - `page_index`: `(Union[int, None])` If the input is a PDF file, this indicates the current page number of the PDF; otherwise, it is `None`.
    - `model_settings`: `(Dict[str, bool])` The model parameters configured for the pipeline.
        - `use_doc_preprocessor`: `(bool)` Controls whether the document preprocessing sub-pipeline is enabled.
        - `use_layout_detection`: `(bool)` Controls whether the layout detection module is enabled.
    - `doc_preprocessor_res`: `(Dict[str, Union[str, Dict[str, bool], int]])` The output of the document preprocessing sub-pipeline. Present only when `use_doc_preprocessor=True`.
        - `input_path`: `(Union[str, None])` The image path accepted by the preprocessing sub-pipeline; saved as `None` when the input is a `numpy.ndarray`.
        - `model_settings`: `(Dict)` The model configuration parameters of the preprocessing sub-pipeline.
            - `use_doc_orientation_classify`: `(bool)` Controls whether document orientation classification is enabled.
            - `use_doc_unwarping`: `(bool)` Controls whether document distortion correction is enabled.
        - `angle`: `(int)` The document orientation prediction. When enabled, it takes values from [0, 1, 2, 3], corresponding to [0°, 90°, 180°, 270°]; when disabled, it is -1.
    - `layout_det_res`: `(Dict[str, List[Dict]])` The output of the layout detection module. Present only when `use_layout_detection=True`.
        - `input_path`: `(Union[str, None])` The image path accepted by the layout detection module; saved as `None` when the input is a `numpy.ndarray`.
        - `boxes`: `(List[Dict])` A list of layout detection predictions.
            - `cls_id`: `(int)` The predicted class ID.
            - `label`: `(str)` The predicted class label.
            - `score`: `(float)` The confidence score of the predicted class.
            - `coordinate`: `(List[float])` The predicted bounding box, in the format [x_min, y_min, x_max, y_max], where (x_min, y_min) is the top-left corner and (x_max, y_max) is the bottom-right corner.
    - `formula_res_list`: `(List[Dict])` A list of formula recognition predictions.
        - `rec_formula`: `(str)` The predicted LaTeX source code.
        - `formula_region_id`: `(int)` The ID of the predicted formula region.
        - `dt_polys`: `(List[float])` The predicted formula bounding box, in the format [x_min, y_min, x_max, y_max], where (x_min, y_min) is the top-left corner and (x_max, y_max) is the bottom-right corner.

- Calling the `save_to_json()` method will save the above content to the specified `save_path`. If a directory is specified, the saved path is `save_path/{your_img_basename}_res.json`; if a file is specified, the results are saved directly to that file. Since JSON files do not support saving numpy arrays, `numpy.array` values are converted to lists.

- Calling the `save_to_img()` method will save the visualization results to the specified `save_path`. If a directory is specified, the saved path is `save_path/{your_img_basename}_formula_res_img.{your_img_extension}`; if a file is specified, the results are saved directly to that file.
  (The pipeline usually produces multiple result images, so specifying a specific file path is not recommended; otherwise, the images will overwrite each other and only the last one will be kept.)

- In addition, you can obtain the visualization images and the prediction results through attributes, as follows:
| Attribute | Description |
|---|---|
| `json` | Get the prediction results in `json` format |
| `img` | Get the visualization images in `dict` format |
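A minimal sketch of reading these attributes (their exact contents are described below; `output` is the generator returned by `predict()`, which can be consumed only once):

```python
for res in output:
    data = res.json  # dict, same content as what save_to_json() writes
    imgs = res.img   # dict of PIL Image.Image visualizations
    # Possible keys: 'preprocessed_img', 'layout_det_res', 'formula_res_img',
    # depending on which sub-modules are enabled (see below).
    if "formula_res_img" in imgs:
        imgs["formula_res_img"].save("./output/formula_vis.png")
```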
- The prediction results obtained through the `json` attribute are of the `dict` type, with content consistent with what is saved by the `save_to_json()` method.
- The prediction results returned by the `img` attribute are of the `dict` type. The keys are `preprocessed_img`, `layout_det_res`, and `formula_res_img`, corresponding to three `Image.Image` objects: the visualization of image preprocessing, the visualization of layout detection, and the visualization of formula recognition, respectively. If the image preprocessing sub-module is not used, the dictionary does not contain `preprocessed_img`; if the layout detection sub-module is not used, it does not contain `layout_det_res`.

In addition, you can obtain the configuration file of the formula recognition pipeline and load it for prediction. Execute the following command to save the configuration to `my_path`:

```bash
paddlex --get_pipeline_config formula_recognition --save_path ./my_path
```

Once you have the configuration file, you can customize the formula recognition pipeline by simply setting the `pipeline` parameter of `create_pipeline` to the path of the pipeline configuration file. An example is shown below:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/formula_recognition.yaml")
output = pipeline.predict(
    input="./general_formula_recognition_001.png",
    use_layout_detection=True,
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    layout_threshold=0.5,
    layout_nms=True,
    layout_unclip_ratio=1.0,
    layout_merge_bboxes_mode="large",
)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/")
```

Note: The parameters in the configuration file are the pipeline's initialization parameters. If you want to change the initialization parameters of the formula recognition pipeline, modify the parameters in the configuration file directly and load it for prediction. Additionally, CLI prediction also supports passing in a configuration file; simply specify its path with `--pipeline`.

## 3. Development Integration/Deployment

If the formula recognition pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.

If you need to integrate the formula recognition pipeline into your Python project, you can refer to the example code in [2.2 Integration via Python Script](#22-integration-via-python-script). In addition, PaddleX provides three other deployment methods, detailed as follows:

🚀 High-Performance Inference: In real production environments, many applications have strict performance requirements for deployment strategies, especially response speed, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing, significantly speeding up the end-to-end process. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.en.md).
☁️ Service-Based Deployment: Service-based deployment is a common form of deployment in production environments. By encapsulating inference capabilities as services, clients can access them via network requests to obtain inference results. PaddleX supports multiple service-based deployment solutions for pipelines. For detailed procedures, please refer to the [PaddleX Service-Based Deployment Guide](../../../pipeline_deploy/serving.en.md).

Below are the API reference for basic service-based deployment and examples of multi-language service invocation:

**API Reference**

For the main operations provided by the service, when a request is processed successfully, the response status code is `200` and the response body has the following attributes:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Fixed at `0`. |
| `errorMsg` | `string` | Error message. Fixed at `"Success"`. |
| `result` | `object` | The result of the operation. |

When a request is not processed successfully, the response body has the following attributes:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Same as the response status code. |
| `errorMsg` | `string` | Error message. |

The main operations provided by the service are as follows:

**Get the formula recognition result of an image.**

`POST /formula-recognition`

The attributes of the request body are as follows:

| Name | Type | Meaning | Required |
|---|---|---|---|
| `file` | `string` | The URL of an image or PDF file accessible to the server, or the Base64-encoded content of such a file. For PDF files exceeding 10 pages, only the first 10 pages are used. | Yes |
| `fileType` | `integer` | The file type. `0` indicates a PDF file and `1` indicates an image file. If this attribute is missing from the request body, the file type is inferred from the URL. | No |

When the request is processed successfully, the `result` of the response body has the following attributes:

| Name | Type | Meaning |
|---|---|---|
| `formulaRecResults` | `object` | The formula recognition results. The array length is 1 (for image input) or the smaller of the number of document pages and 10 (for PDF input). For PDF input, each element in the array represents the result of one page. |
| `dataInfo` | `object` | Information about the input data. |

Each element in `formulaRecResults` is an object with the following attributes:

| Name | Type | Meaning |
|---|---|---|
| `formulas` | `array` | The positions and contents of the formulas. |
| `inputImage` | `string` | The input image, in JPEG format, Base64-encoded. |
| `layoutImage` | `string` | The layout detection result image, in JPEG format, Base64-encoded. |

Each element in `formulas` is an object with the following attributes:

| Name | Type | Meaning |
|---|---|---|
| `poly` | `array` | The position of the formula. The elements in the array are the vertex coordinates of the polygon enclosing the formula. |
| `latex` | `string` | The LaTeX content of the formula. |
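Putting the tables above together, a response element can be consumed as in this minimal sketch (assuming `res` is one element of `formulaRecResults` already parsed from the response JSON, as in the full example below):

```python
# 'res' is one element of result["formulaRecResults"]
for formula in res["formulas"]:
    print(formula["latex"])  # LaTeX source of the formula
    print(formula["poly"])   # vertex coordinates of the enclosing polygon
```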
**Multi-language Service Call Examples**

**Python**

```python
import base64
import requests

API_URL = "http://localhost:8080/formula-recognition"
file_path = "./demo.jpg"

# Read the local file and Base64-encode it for the "file" field of the request
with open(file_path, "rb") as file:
    file_bytes = file.read()
    file_data = base64.b64encode(file_bytes).decode("ascii")

payload = {"file": file_data, "fileType": 1}

response = requests.post(API_URL, json=payload)

assert response.status_code == 200
result = response.json()["result"]
# One element per page for PDF input; a single element for image input
for i, res in enumerate(result["formulaRecResults"]):
    print("Detected formulas:")
    print(res["formulas"])
    layout_img_path = f"layout_{i}.jpg"
    with open(layout_img_path, "wb") as f:
        f.write(base64.b64decode(res["layoutImage"]))
    print(f"Output image saved at {layout_img_path}")

📱 Edge Deployment: Edge deployment places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.en.md).

You can choose the appropriate deployment method based on your needs to integrate the model pipeline into subsequent AI applications.

## 4. Secondary Development

If the default model weights provided by the formula recognition pipeline do not meet your requirements for accuracy or speed, you can try fine-tuning the existing models with your own domain-specific or application-specific data to improve recognition performance in your scenario.

### 4.1 Model Fine-Tuning

Since the formula recognition pipeline consists of several modules, unsatisfactory performance may stem from any one of them. You can analyze the poorly recognized images to determine which module is problematic, and refer to the corresponding fine-tuning tutorials linked in the table below.
| Scenario | Fine-Tuning Module | Reference Link |
|---|---|---|
| Formulas are missing | Layout Detection Module | Link |
| Formula content is inaccurate | Formula Recognition Module | Link |
| Whole-image rotation correction is inaccurate | Document Image Orientation Classification Module | Link |
| Image distortion correction is inaccurate | Text Image Correction Module | Fine-tuning not supported |
### 4.2 Model Application

After fine-tuning with your private dataset, you will obtain local model weight files. To use the fine-tuned weights, simply modify the pipeline configuration file, replacing the paths in the corresponding positions with the local paths of your fine-tuned model weights:

```yaml
...
SubModules:
  LayoutDetection:
    module_name: layout_detection
    model_name: PP-DocLayout-L
    model_dir: null # Replace with the fine-tuned layout detection model weights path
...
  FormulaRecognition:
    module_name: formula_recognition
    model_name: PP-FormulaNet-L
    model_dir: null # Replace with the fine-tuned formula recognition model weights path
    batch_size: 5
SubPipelines:
  DocPreprocessor:
    pipeline_name: doc_preprocessor
    use_doc_orientation_classify: True
    use_doc_unwarping: True
    SubModules:
      DocOrientationClassify:
        module_name: doc_text_orientation
        model_name: PP-LCNet_x1_0_doc_ori
        model_dir: null # Replace with the fine-tuned document image orientation classification model weights path
        batch_size: 1
...
```

Then, refer to the command-line or Python script methods in [2. Quick Start](#2-quick-start) to load the modified pipeline configuration file.

## 5. Multi-Hardware Support

PaddleX supports a variety of mainstream hardware devices, including NVIDIA GPU, Kunlunxin XPU, Ascend NPU, and Cambricon MLU. You can seamlessly switch between hardware devices by simply modifying the `--device` parameter. For example, to run formula recognition pipeline inference on an Ascend NPU, the CLI command is:

```bash
paddlex --pipeline formula_recognition \
    --input general_formula_recognition_001.png \
    --use_layout_detection True \
    --use_doc_orientation_classify False \
    --use_doc_unwarping False \
    --layout_threshold 0.5 \
    --layout_nms True \
    --layout_unclip_ratio 1.0 \
    --layout_merge_bboxes_mode large \
    --save_path ./output \
    --device npu:0
```

If you want to use the formula recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).

Of course, you can also specify the hardware device when calling `create_pipeline()` or `predict()` in a Python script.
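For example, a minimal sketch mirroring the CLI command above (the `device` keyword takes the same strings as `--device`):

```python
from paddlex import create_pipeline

# Run the formula recognition pipeline on the first Ascend NPU.
pipeline = create_pipeline(pipeline="formula_recognition", device="npu:0")

output = pipeline.predict(input="./general_formula_recognition_001.png")
for res in output:
    res.print()
```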