Tingquan Gao 1 year ago
Parent
Commit
3ce1002d14

+ 101 - 0
docs/module_usage/instructions/model_python_API.md

@@ -1,3 +1,104 @@
 简体中文 | [English](model_python_API_en.md)
 
+# PaddleX单模型Python脚本使用说明
 
+在使用Python脚本进行单模型快速推理前,请确保您已经按照[PaddleX本地安装教程](../../installation/installation.md)完成了PaddleX的安装。
+
+## 一、使用示例
+以图像分类模型为例,使用方式如下:
+
+```python
+from paddlex import create_model
+model = create_model("PP-LCNet_x1_0")
+output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
+for res in output:
+    res.print(json_format=False)
+    res.save_to_img("./output/")
+    res.save_to_json("./output/res.json")
+```
+简单来说,只需三步:
+
+* 调用`create_model()`方法实例化预测模型对象;
+* 调用预测模型对象的`predict()`方法进行推理预测;
+* 调用`print()`、`save_to_xxx()`等相关方法对预测结果进行可视化或是保存。
+
+## 二、API说明
+### 1. 调用`create_model()`方法实例化预测模型对象
+* `create_model`:实例化预测模型对象;
+  * 参数:
+    * `model_name`:`str` 类型,模型名或是本地inference模型文件路径,如“PP-LCNet_x1_0”、“/path/to/PP-LCNet_x1_0_infer/”;
+    * `device`:`str` 类型,用于设置模型推理设备,如为GPU设置则可以指定卡号,如“cpu”、“gpu:2”;
+    * `pp_option`:`PaddlePredictorOption` 类型,用于设置模型推理后端;
+  * 返回值:`BasePredictor`类型。
+### 2. 调用预测模型对象的`predict()`方法进行推理预测
+* `predict`:使用定义的预测模型,对输入数据进行预测;
+  * 参数:
+    * `input`:任意类型,支持str类型表示的待预测数据文件路径,或是包含待预测文件的目录,或是网络URL;对于CV模型,支持numpy.ndarray表示的图像数据;对于TS模型,支持pandas.DataFrame类型数据;同样支持上述类型所构成的list类型;
+  * 返回值:`generator`,每次调用返回一个样本的预测结果;
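
`predict()` 返回的是生成器,结果按样本惰性产出,只有在迭代时才会对相应样本执行推理,且一次遍历后即被耗尽。下面用一段纯 Python 示意这种消费模式(`fake_predict` 为假设的演示函数,并非 PaddleX 的真实实现):

```python
# 示意:用普通生成器模拟 predict() 按样本惰性返回结果(假设性示例,非 PaddleX 实现)
def fake_predict(inputs):
    for path in inputs:
        # 真实场景中,只有迭代到这里才会对该样本执行一次推理
        yield {"input_path": path, "class_ids": [296]}

output = fake_predict(["a.jpg", "b.jpg"])
first = next(output)       # 此时才产出第一个样本的结果
rest = list(output)        # 继续消费剩余样本后,生成器即被耗尽
```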
+### 3. 对预测结果进行可视化
+模型的预测结果支持访问、可视化及保存,可通过相应的属性或方法实现,具体如下:
+#### 属性:
+* `str`:`str` 类型表示的预测结果;
+  * 返回值:`str` 类型,预测结果的str表示;
+* `json`:json格式表示的预测结果;
+  * 返回值:`dict` 类型;
+* `img`:预测结果的可视化图;
+  * 返回值:`PIL.Image` 类型;
+* `html`:预测结果的HTML表示;
+  * 返回值:`str` 类型;
+#### 方法:
+* `print()`:将预测结果输出,需要注意,当预测结果不便于直接输出时,会省略相关内容;
+  * 参数:
+    * `json_format`:`bool`类型,默认为`False`,表示不使用json格式化输出;
+    * `indent`:`int`类型,默认为`4`,当`json_format`为`True`时有效,表示json格式化的缩进层级;
+    * `ensure_ascii`:`bool`类型,默认为`False`,当`json_format`为`True`时有效;
+  * 返回值:无;
+* `save_to_json()`:将预测结果保存为json格式的文件,需要注意,当预测结果包含无法json序列化的数据时,会自动进行格式转换以实现序列化保存;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+    * `indent`:`int`类型,默认为`4`,表示json格式化的缩进层级;
+    * `ensure_ascii`:`bool`类型,默认为`False`,表示json序列化时是否将非ASCII字符转义;
+  * 返回值:无;
+* `save_to_img()`:将预测结果可视化并保存为图像;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
+* `save_to_csv()`:将预测结果保存为CSV文件;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
+* `save_to_html()`:将预测结果保存为HTML文件;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
+* `save_to_xlsx()`:将预测结果保存为XLSX文件;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
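
上述 `print()` 与 `save_to_json()` 中的 `indent`、`ensure_ascii` 参数与标准库 `json.dumps` 的同名参数含义一致,可用如下纯 Python 片段体会其效果(仅为示意):

```python
import json

res = {"label": "猫", "score": 0.98}
pretty = json.dumps(res, indent=4, ensure_ascii=False)   # 中文原样输出,4 空格缩进
escaped = json.dumps(res, ensure_ascii=True)             # 非ASCII字符被转义为 \uXXXX
print(pretty)
```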
+
+### 4. 推理后端设置
+
+PaddleX 支持通过`PaddlePredictorOption`设置推理后端,相关API如下:
+
+#### 属性:
+
+* `device`:推理设备;
+  * 支持设置 `str` 类型表示的推理设备类型及卡号,设备类型支持可选 'gpu', 'cpu', 'npu', 'xpu', 'mlu',当使用加速卡时,支持指定卡号,如使用 0 号 gpu:'gpu:0',默认为 'gpu:0';
+  * 返回值:`str`类型,当前设置的推理设备。
+* `run_mode`:推理后端;
+  * 支持设置 `str` 类型的推理后端,支持可选 'paddle','trt_fp32','trt_fp16','trt_int8','mkldnn','mkldnn_bf16',其中 'mkldnn' 仅当推理设备使用 cpu 时可选,默认为 'paddle';
+  * 返回值:`str`类型,当前设置的推理后端。
+* `cpu_threads`:cpu 加速库计算线程数,仅当推理设备使用 cpu 时有效;
+  * 支持设置 `int` 类型,cpu 推理时加速库计算线程数;
+  * 返回值:`int` 类型,当前设置的加速库计算线程数。
+
+#### 方法:
+* `get_support_run_mode`:获取支持的推理后端设置;
+  * 参数:无;
+  * 返回值:list 类型,可选的推理后端设置。
+* `get_support_device`:获取支持的运行设备类型;
+  * 参数:无;
+  * 返回值:list 类型,可选的设备类型。
+* `get_device`:获取当前设置的设备;
+  * 参数:无;
+  * 返回值:str 类型。
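
`PaddlePredictorOption` 以属性读写的方式暴露配置项。下面用纯 Python 勾勒这种属性式接口的形态(`PredictorOptionSketch` 为假设的演示类,并非 PaddleX 的真实实现):

```python
# 示意:属性式配置对象的最小骨架(假设性示例,非 PaddleX 实现)
class PredictorOptionSketch:
    SUPPORT_RUN_MODE = ["paddle", "trt_fp32", "trt_fp16", "trt_int8", "mkldnn", "mkldnn_bf16"]

    def __init__(self):
        self._device = "gpu:0"      # 默认设备
        self._run_mode = "paddle"   # 默认后端

    @property
    def run_mode(self):
        return self._run_mode

    @run_mode.setter
    def run_mode(self, mode):
        # 设置时校验取值是否在支持列表内
        if mode not in self.SUPPORT_RUN_MODE:
            raise ValueError(f"unsupported run_mode: {mode}")
        self._run_mode = mode

    def get_support_run_mode(self):
        return list(self.SUPPORT_RUN_MODE)

    def get_device(self):
        return self._device

opt = PredictorOptionSketch()
opt.run_mode = "mkldnn"
```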

+ 108 - 0
docs/module_usage/instructions/model_python_API_en.md

@@ -1,2 +1,110 @@
 [简体中文](model_python_API.md) | English
 
+# PaddleX Single Model Python Usage Instructions
+
+Before using Python scripts for single model quick inference, please ensure you have completed the installation of PaddleX following the [PaddleX Local Installation Tutorial](../../installation/installation_en.md).
+
+## I. Usage Example
+Taking the image classification model as an example, the usage is as follows:
+
+```python
+from paddlex import create_model
+model = create_model("PP-LCNet_x1_0")
+output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
+for res in output:
+    res.print(json_format=False)
+    res.save_to_img("./output/")
+    res.save_to_json("./output/res.json")
+```
+In short, just three steps:
+
+* Call the `create_model()` method to instantiate the prediction model object;
+* Call the `predict()` method of the prediction model object to perform inference prediction;
+* Call `print()`, `save_to_xxx()` and other related methods to visualize or save the prediction results.
+
+## II. API Description
+
+### 1. Instantiate the Prediction Model Object by Calling the `create_model()` Method
+* `create_model`: Instantiate the prediction model object;
+  * Parameters:
+    * `model_name`: `str` type, model name or local inference model file path, such as "PP-LCNet_x1_0", "/path/to/PP-LCNet_x1_0_infer/";
+    * `device`: `str` type, used to set the model inference device, such as "cpu", "gpu:2" for GPU settings;
+    * `pp_option`: `PaddlePredictorOption` type, used to set the model inference backend;
+  * Return Value: `BasePredictor` type.
+
+### 2. Perform Inference Prediction by Calling the `predict()` Method of the Prediction Model Object
+* `predict`: Use the defined prediction model to predict the input data;
+  * Parameters:
+    * `input`: Any type, supports str type representing the path of the file to be predicted, or a directory containing files to be predicted, or a network URL; for CV models, supports numpy.ndarray representing image data; for TS models, supports pandas.DataFrame type data; also supports list types composed of the above types;
+  * Return Value: `generator`, returns the prediction result of one sample per call;
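
Since `predict()` returns a generator, results are produced lazily, one sample at a time, and a single pass exhausts them. A plain-Python sketch of this consumption pattern (illustrative only; `fake_predict` is a hypothetical stand-in, not the PaddleX implementation):

```python
# Illustrative stand-in for predict(): yields one result per input sample
def fake_predict(inputs):
    for path in inputs:
        # In the real API, inference for this sample runs only when iterated
        yield {"input_path": path, "class_ids": [296]}

results = fake_predict(["a.jpg", "b.jpg", "c.jpg"])
collected = [r["input_path"] for r in results]  # iterating drives the "inference"
leftover = list(results)                        # the generator is now exhausted
```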
+
+### 3. Visualize the Prediction Results
+The prediction results can be accessed, visualized, and saved through the corresponding attributes and methods, as follows:
+
+#### Attributes:
+* `str`: Representation of the prediction result in `str` type;
+  * Returns: A `str` type, the string representation of the prediction result.
+* `json`: The prediction result in JSON format;
+  * Returns: A `dict` type.
+* `img`: The visualization image of the prediction result;
+  * Returns: A `PIL.Image` type.
+* `html`: The HTML representation of the prediction result;
+  * Returns: A `str` type.
+
+#### Methods:
+* `print()`: Outputs the prediction result. Note that when the prediction result is not convenient for direct output, relevant content will be omitted;
+  * Parameters:
+    * `json_format`: `bool` type, default is `False`, indicating that json formatting is not used;
+    * `indent`: `int` type, default is `4`, valid when `json_format` is `True`, indicating the indentation level for json formatting;
+    * `ensure_ascii`: `bool` type, default is `False`, valid when `json_format` is `True`;
+  * Return Value: None;
+* `save_to_json()`: Saves the prediction result as a JSON file. Note that when the result contains data that cannot be directly serialized to JSON, it is automatically converted to a serializable form before saving;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result;
+    * `indent`: `int` type, default is `4`, indicating the indentation level for JSON formatting;
+    * `ensure_ascii`: `bool` type, default is `False`, indicating whether non-ASCII characters are escaped during JSON serialization;
+  * Return Value: None;
+* `save_to_img()`: Visualizes the prediction result and saves it as an image;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
+* `save_to_csv()`: Saves the prediction result as a CSV file;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
+* `save_to_html()`: Saves the prediction result as an HTML file;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
+* `save_to_xlsx()`: Saves the prediction result as an XLSX file;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
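
The automatic format conversion that `save_to_json()` performs for non-JSON-serializable values (such as `numpy.ndarray`) can be approximated with the `default` hook of the standard-library `json` module. A minimal sketch under that assumption (`FakeArray` and `to_jsonable` are hypothetical names for illustration):

```python
import json

class FakeArray:  # hypothetical stand-in for an array type like numpy.ndarray
    def __init__(self, values):
        self.values = values

def to_jsonable(obj):
    # Convert otherwise unserializable objects to plain lists before dumping
    if isinstance(obj, FakeArray):
        return list(obj.values)
    raise TypeError(f"not serializable: {type(obj)!r}")

result = {"scores": FakeArray([0.62817, 0.03729])}
serialized = json.dumps(result, default=to_jsonable, indent=4)
```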
+
+### 4. Inference Backend Configuration
+
+PaddleX supports configuring the inference backend through `PaddlePredictorOption`. Relevant APIs are as follows:
+
+#### Attributes:
+
+* `device`: Inference device;
+  * Supports setting the device type and card number represented by `str`. Device types include 'gpu', 'cpu', 'npu', 'xpu', 'mlu'. When using an accelerator card, you can specify the card number, e.g., 'gpu:0' for GPU 0. The default is 'gpu:0';
+  * Return value: `str` type, the currently set inference device.
+* `run_mode`: Inference backend;
+  * Supports setting the inference backend as a `str` type, options include 'paddle', 'trt_fp32', 'trt_fp16', 'trt_int8', 'mkldnn', 'mkldnn_bf16'. 'mkldnn' is only selectable when the inference device is 'cpu'. The default is 'paddle';
+  * Return value: `str` type, the currently set inference backend.
+* `cpu_threads`: Number of CPU threads for the acceleration library, only valid when the inference device is 'cpu';
+  * Supports setting an `int` type for the number of CPU threads for the acceleration library during CPU inference;
+  * Return value: `int` type, the currently set number of threads for the acceleration library.
+
+#### Methods:
+* `get_support_run_mode`: Get supported inference backend configurations;
+  * Parameters: None;
+  * Return value: List type, the available inference backend configurations.
+* `get_support_device`: Get supported device types for running;
+  * Parameters: None;
+  * Return value: List type, the available device types.
+* `get_device`: Get the currently set device;
+  * Parameters: None;
+  * Return value: `str` type.

+ 60 - 0
docs/pipeline_usage/instructions/pipeline_CLI_usage.md

@@ -0,0 +1,60 @@
+简体中文 | [English](pipeline_CLI_usage_en.md)
+
+# PaddleX模型产线CLI命令行使用说明
+
+在使用CLI命令行进行模型产线快速推理前,请确保您已经按照[PaddleX本地安装教程](../../installation/installation.md)完成了PaddleX的安装。
+
+## 一、使用示例
+
+### 1. 快速体验
+
+以图像分类产线为例,使用方式如下:
+
+```bash
+paddlex --pipeline image_classification \
+        --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
+        --device gpu:0 \
+        --save_path ./output/
+```
+只需一步就能完成推理预测并保存预测结果,相关参数说明如下:
+
+* `pipeline`:模型产线名称或是模型产线配置文件的本地路径,如模型产线名“image_classification”,或模型产线配置文件路径“path/to/image_classification.yaml”;
+* `input`:待预测数据文件路径,支持本地文件路径、包含待预测数据文件的本地目录、文件URL链接;
+* `device`:用于设置模型推理设备,如为GPU设置则可以指定卡号,如“cpu”、“gpu:2”,当不传入时,如有GPU设置则使用GPU,否则使用CPU;
+* `save_path`:预测结果的保存路径,当不传入时,则不保存预测结果;
+
+### 2. 自定义产线配置
+
+如需对产线配置进行修改,可获取配置文件后进行修改,仍以图像分类产线为例,获取配置文件方式如下:
+
+```bash
+paddlex --get_pipeline_config image_classification
+
+# Please enter the path that you want to save the pipeline config file: (default `./`)
+./configs/
+
+# The pipeline config has been saved to: configs/image_classification.yaml
+```
+
+然后可修改产线配置文件`configs/image_classification.yaml`,如图像分类配置文件内容为:
+
+```yaml
+Global:
+  pipeline_name: image_classification
+  input: https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg
+
+Pipeline:
+  model: PP-LCNet_x0_5
+  batch_size: 1
+  device: "gpu:0"
+```
+
+在修改完成后,即可使用该配置文件进行模型产线推理预测,方式如下:
+
+```bash
+paddlex --pipeline configs/image_classification.yaml \
+        --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
+        --save_path ./output/
+
+# {'input_path': '/root/.paddlex/predict_input/general_image_classification_001.jpg', 'class_ids': [296, 170, 356, 258, 248], 'scores': array([0.62817, 0.03729, 0.03262, 0.03247, 0.03196]), 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}
+```

+ 60 - 0
docs/pipeline_usage/instructions/pipeline_CLI_usage_en.md

@@ -0,0 +1,60 @@
+[简体中文](pipeline_CLI_usage.md) | English
+
+# PaddleX Pipeline CLI Usage Instructions
+
+Before using the CLI command line for rapid inference of the pipeline, please ensure that you have completed the installation of PaddleX according to the [PaddleX Local Installation Tutorial](../../installation/installation_en.md).
+
+## I. Usage Example
+
+### 1. Quick Experience
+
+Taking the image classification pipeline as an example, the usage is as follows:
+
+```bash
+paddlex --pipeline image_classification \
+        --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
+        --device gpu:0 \
+        --save_path ./output/
+```
+This single step completes the inference prediction and saves the prediction results. Explanations for the relevant parameters are as follows:
+
+* `pipeline`: The name of the pipeline or the local path to the pipeline configuration file, such as the pipeline name "image_classification", or the path to the pipeline configuration file "path/to/image_classification.yaml";
+* `input`: The path to the data file to be predicted, supporting local file paths, local directories containing data files to be predicted, and file URL links;
+* `device`: Used to set the model inference device. If set for GPU, you can specify the card number, such as "cpu", "gpu:2". When not specified, if GPU is available, it will be used; otherwise, CPU will be used;
+* `save_path`: The save path for prediction results. When not specified, the prediction results will not be saved;
+
+### 2. Custom Pipeline Configuration
+
+If you need to modify the pipeline configuration, you can retrieve the configuration file and modify it. Still taking the image classification pipeline as an example, the way to retrieve the configuration file is as follows:
+
+```bash
+paddlex --get_pipeline_config image_classification
+
+# Please enter the path that you want to save the pipeline config file: (default `./`)
+./configs/
+
+# The pipeline config has been saved to: configs/image_classification.yaml
+```
+
+You can then modify the pipeline configuration file `configs/image_classification.yaml`. For example, the content of the image classification configuration file is:
+
+```yaml
+Global:
+  pipeline_name: image_classification
+  input: https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg
+
+Pipeline:
+  model: PP-LCNet_x0_5
+  batch_size: 1
+  device: "gpu:0"
+```
+
+Once the modification is completed, you can use this configuration file to perform model pipeline inference prediction as follows:
+
+```bash
+paddlex --pipeline configs/image_classification.yaml \
+        --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
+        --save_path ./output/
+
+# {'input_path': '/root/.paddlex/predict_input/general_image_classification_001.jpg', 'class_ids': [296, 170, 356, 258, 248], 'scores': array([0.62817, 0.03729, 0.03262, 0.03247, 0.03196]), 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}
+```

+ 106 - 0
docs/pipeline_usage/instructions/pipeline_python_API.md

@@ -0,0 +1,106 @@
+简体中文 | [English](pipeline_python_API_en.md)
+
+# PaddleX模型产线Python脚本使用说明
+
+在使用Python脚本进行模型产线快速推理前,请确保您已经按照[PaddleX本地安装教程](../../installation/installation.md)完成了PaddleX的安装。
+
+## 一、使用示例
+
+以图像分类产线为例,使用方式如下:
+
+```python
+from paddlex import create_pipeline
+pipeline = create_pipeline("image_classification")
+output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
+for res in output:
+    res.print(json_format=False)
+    res.save_to_img("./output/")
+    res.save_to_json("./output/res.json")
+```
+简单来说,只需三步:
+
+* 调用`create_pipeline()`方法实例化预测模型产线对象;
+* 调用预测模型产线对象的`predict()`方法进行推理预测;
+* 调用`print()`、`save_to_xxx()`等相关方法对预测结果进行可视化或是保存。
+
+## 二、API说明
+
+### 1. 调用`create_pipeline()`方法实例化预测模型产线对象
+* `create_pipeline`:实例化预测模型产线对象;
+  * 参数:
+    * `pipeline_name`:`str` 类型,产线名或是本地产线配置文件路径,如“image_classification”、“/path/to/image_classification.yaml”;
+    * `device`:`str` 类型,用于设置模型推理设备,如为GPU设置则可以指定卡号,如“cpu”、“gpu:2”;
+    * `pp_option`:`PaddlePredictorOption` 类型,用于设置模型推理后端;
+  * 返回值:`BasePredictor`类型。
+### 2. 调用预测模型产线对象的`predict()`方法进行推理预测
+* `predict`:使用定义的预测模型产线,对输入数据进行预测;
+  * 参数:
+    * `input`:任意类型,支持str类型表示的待预测数据文件路径,或是包含待预测文件的目录,或是网络URL;对于CV任务,支持numpy.ndarray表示的图像数据;对于TS任务,支持pandas.DataFrame类型数据;同样支持上述类型所构成的list类型;
+  * 返回值:`generator`,每次调用返回一个样本的预测结果;
+### 3. 对预测结果进行可视化
+模型产线的预测结果支持访问、可视化及保存,可通过相应的属性或方法实现,具体如下:
+#### 属性:
+* `str`:`str` 类型表示的预测结果;
+  * 返回值:`str` 类型,预测结果的str表示;
+* `json`:json格式表示的预测结果;
+  * 返回值:`dict` 类型;
+* `img`:预测结果的可视化图;
+  * 返回值:`PIL.Image` 类型;
+* `html`:预测结果的HTML表示;
+  * 返回值:`str` 类型;
+#### 方法:
+* `print()`:将预测结果输出,需要注意,当预测结果不便于直接输出时,会省略相关内容;
+  * 参数:
+    * `json_format`:`bool`类型,默认为`False`,表示不使用json格式化输出;
+    * `indent`:`int`类型,默认为`4`,当`json_format`为`True`时有效,表示json格式化的缩进层级;
+    * `ensure_ascii`:`bool`类型,默认为`False`,当`json_format`为`True`时有效;
+  * 返回值:无;
+* `save_to_json()`:将预测结果保存为json格式的文件,需要注意,当预测结果包含无法json序列化的数据时,会自动进行格式转换以实现序列化保存;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+    * `indent`:`int`类型,默认为`4`,表示json格式化的缩进层级;
+    * `ensure_ascii`:`bool`类型,默认为`False`,表示json序列化时是否将非ASCII字符转义;
+  * 返回值:无;
+* `save_to_img()`:将预测结果可视化并保存为图像;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
+* `save_to_csv()`:将预测结果保存为CSV文件;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
+* `save_to_html()`:将预测结果保存为HTML文件;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
+* `save_to_xlsx()`:将预测结果保存为XLSX文件;
+  * 参数:
+    * `save_path`:`str`类型,结果保存的路径;
+  * 返回值:无;
+
+### 4. 推理后端设置
+
+PaddleX 支持通过`PaddlePredictorOption`设置推理后端,相关API如下:
+
+#### 属性:
+
+* `device`:推理设备;
+  * 支持设置 `str` 类型表示的推理设备类型及卡号,设备类型支持可选 'gpu', 'cpu', 'npu', 'xpu', 'mlu',当使用加速卡时,支持指定卡号,如使用 0 号 gpu:'gpu:0',默认为 'gpu:0';
+  * 返回值:`str`类型,当前设置的推理设备。
+* `run_mode`:推理后端;
+  * 支持设置 `str` 类型的推理后端,支持可选 'paddle','trt_fp32','trt_fp16','trt_int8','mkldnn','mkldnn_bf16',其中 'mkldnn' 仅当推理设备使用 cpu 时可选,默认为 'paddle';
+  * 返回值:`str`类型,当前设置的推理后端。
+* `cpu_threads`:cpu 加速库计算线程数,仅当推理设备使用 cpu 时有效;
+  * 支持设置 `int` 类型,cpu 推理时加速库计算线程数;
+  * 返回值:`int` 类型,当前设置的加速库计算线程数。
+
+#### 方法:
+* `get_support_run_mode`:获取支持的推理后端设置;
+  * 参数:无;
+  * 返回值:list 类型,可选的推理后端设置。
+* `get_support_device`:获取支持的运行设备类型;
+  * 参数:无;
+  * 返回值:list 类型,可选的设备类型。
+* `get_device`:获取当前设置的设备;
+  * 参数:无;
+  * 返回值:str 类型。

+ 123 - 0
docs/pipeline_usage/instructions/pipeline_python_API_en.md

@@ -0,0 +1,123 @@
+[简体中文](pipeline_python_API.md) | English
+
+# PaddleX Model Pipeline Python Usage Instructions
+
+Before using Python scripts for rapid inference on model pipelines, please ensure you have installed PaddleX following the [PaddleX Local Installation Guide](../../installation/installation_en.md).
+
+## I. Usage Example
+Taking the image classification pipeline as an example, the usage is as follows:
+
+```python
+from paddlex import create_pipeline
+pipeline = create_pipeline("image_classification")
+output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
+for res in output:
+    res.print(json_format=False)
+    res.save_to_img("./output/")
+    res.save_to_json("./output/res.json")
+```
+In short, there are only three steps:
+
+* Call the `create_pipeline()` method to instantiate the prediction model pipeline object;
+* Call the `predict()` method of the prediction model pipeline object for inference;
+* Call `print()`, `save_to_xxx()` and other related methods to visualize or save the prediction results.
+
+## II. API Description
+
+### 1. Instantiate the Prediction Model Pipeline Object by Calling `create_pipeline()`
+* `create_pipeline`: Instantiates the prediction model pipeline object;
+  * Parameters:
+    * `pipeline_name`: `str` type, the pipeline name or the local pipeline configuration file path, such as "image_classification", "/path/to/image_classification.yaml";
+    * `device`: `str` type, used to set the model inference device, such as "cpu" or "gpu:2" for GPU settings;
+    * `pp_option`: `PaddlePredictorOption` type, used to set the model inference backend;
+  * Return Value: `BasePredictor` type.
+
+### 2. Perform Inference by Calling the `predict()` Method of the Prediction Model Pipeline Object
+* `predict`: Uses the defined prediction model pipeline to predict input data;
+  * Parameters:
+    * `input`: Any type, supporting str representing the path of the file to be predicted, or a directory containing files to be predicted, or a network URL; for CV tasks, supports numpy.ndarray representing image data; for TS tasks, supports pandas.DataFrame type data; also supports lists of the above types;
+  * Return Value: `generator`, returns the prediction result of one sample per call;
+
+### 3. Visualize the Prediction Results
+The prediction results of the model pipeline support access, visualization, and saving, which can be achieved through corresponding attributes or methods, specifically as follows:
+
+#### Attributes:
+* `str`: `str` type representation of the prediction result;
+  * Return Value: `str` type, string representation of the prediction result;
+* `json`: Prediction result in JSON format;
+  * Return Value: `dict` type;
+* `img`: Visualization image of the prediction result;
+  * Return Value: `PIL.Image` type;
+* `html`: HTML representation of the prediction result;
+  * Return Value: `str` type;
+
+#### Methods:
+* `print()`: Outputs the prediction result. Note that when the prediction result is not convenient for direct output, relevant content will be omitted;
+  * Parameters:
+    * `json_format`: `bool` type, default is `False`, indicating that json formatting is not used;
+    * `indent`: `int` type, default is `4`, valid when `json_format` is `True`, indicating the indentation level for json formatting;
+    * `ensure_ascii`: `bool` type, default is `False`, valid when `json_format` is `True`;
+  * Return Value: None;
+* `save_to_json()`: Saves the prediction result as a JSON file. Note that when the result contains data that cannot be directly serialized to JSON, it is automatically converted to a serializable form before saving;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result;
+    * `indent`: `int` type, default is `4`, indicating the indentation level for JSON formatting;
+    * `ensure_ascii`: `bool` type, default is `False`, indicating whether non-ASCII characters are escaped during JSON serialization;
+  * Return Value: None;
+* `save_to_img()`: Visualizes the prediction result and saves it as an image;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
+* `save_to_csv()`: Saves the prediction result as a CSV file;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
+* `save_to_html()`: Saves the prediction result as an HTML file;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
+* `save_to_xlsx()`: Saves the prediction result as an XLSX file;
+  * Parameters:
+    * `save_path`: `str` type, the path to save the result.
+  * Returns: None.
+
+### 4. Inference Backend Configuration
+
+PaddleX supports configuring the inference backend through `PaddlePredictorOption`. Relevant APIs are as follows:
+
+#### Attributes:
+
+* `device`: Inference device;
+  * Supports setting the device type and card number represented by `str`. Device types include 'gpu', 'cpu', 'npu', 'xpu', 'mlu'. When using an accelerator card, you can specify the card number, e.g., 'gpu:0' for GPU 0. The default is 'gpu:0';
+  * Return value: `str` type, the currently set inference device.
+* `run_mode`: Inference backend;
+  * Supports setting the inference backend as a `str` type, options include 'paddle', 'trt_fp32', 'trt_fp16', 'trt_int8', 'mkldnn', 'mkldnn_bf16'. 'mkldnn' is only selectable when the inference device is 'cpu'. The default is 'paddle';
+  * Return value: `str` type, the currently set inference backend.
+* `cpu_threads`: Number of CPU threads for the acceleration library, only valid when the inference device is 'cpu';
+  * Supports setting an `int` type for the number of CPU threads for the acceleration library during CPU inference;
+  * Return value: `int` type, the currently set number of threads for the acceleration library.
+
+#### Methods:
+* `get_support_run_mode`: Get supported inference backend configurations;
+  * Parameters: None;
+  * Return value: List type, the available inference backend configurations.
+* `get_support_device`: Get supported device types for running;
+  * Parameters: None;
+  * Return value: List type, the available device types.
+* `get_device`: Get the currently set device;
+  * Parameters: None;
+  * Return value: `str` type.
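
A device string such as 'gpu:0' combines a device type with an optional card number. A small sketch (illustrative only, not PaddleX code; `parse_device` is a hypothetical helper) of how such strings can be validated and split:

```python
SUPPORTED = {"gpu", "cpu", "npu", "xpu", "mlu"}

def parse_device(device: str):
    # Split "gpu:0" into ("gpu", 0); a bare "cpu" defaults to card id 0
    dev_type, _, card = device.partition(":")
    if dev_type not in SUPPORTED:
        raise ValueError(f"unsupported device type: {dev_type}")
    return dev_type, int(card) if card else 0

parsed = parse_device("gpu:2")
```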

+ 19 - 14
docs/pipeline_usage/pipeline_develop_guide.md

@@ -38,11 +38,11 @@ PaddleX提供了三种可以快速体验产线效果的方式,您可以根据
 * 在线快速体验地址:[PaddleX产线列表(CPU/GPU)](../support_list/pipelines_list.md)
 * 命令行快速体验:[PaddleX产线命令行使用说明](../pipeline_usage/instructions/pipeline_CLI_usage.md)
 * Python脚本快速体验:[PaddleX产线Python脚本使用说明](../pipeline_usage/instructions/pipeline_python_API.md)
-* 
+
 以实现登机牌识别任务的通用OCR产线为例,一行命令即可快速体验产线效果:
 
 ```bash
-paddlex --pipeline OCR --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --device gpu:0
+paddlex --pipeline OCR --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --device gpu:0 --save_path ./output/
 ```
 参数说明:
 
@@ -51,27 +51,32 @@ paddlex --pipeline OCR --input https://paddle-model-ecology.bj.bcebos.com/paddle
 --input:待处理的输入图片的本地路径或URL
 --device 使用的GPU序号(例如gpu:0表示使用第0块GPU,gpu:1,2表示使用第1、2块GPU),也可选择使用CPU(--device cpu)
 ```
-执行后,将提示选择 OCR 产线配置文件保存路径,默认保存至*当前目录*,也可 *自定义路径*。[@郜廷权](https://ku.baidu-int.com?t=mention&mt=contact&id=0e9e1070-7ca6-11ef-928f-85a316a9b6e7)
 
-此外,也可在执行命令时加入 `-y` 参数,则可跳过路径选择,直接将产线配置文件保存至当前目录。
+如需对产线配置进行修改,可获取配置文件后进行修改,获取配置文件方式如下:
+
+```bash
+paddlex --get_pipeline_config OCR
+```
 
 获取产线配置文件后,可将 `--pipeline` 替换为配置文件保存路径,即可使配置文件生效。例如,若配置文件保存路径为 `./ocr.yaml`,只需执行:
 
 ```bash
-paddlex --pipeline ./ocr.yaml --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png
+paddlex --pipeline ./ocr.yaml --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --save_path ./output/
 ```
-其中,`--model`、`--device` 等参数无需指定,将使用配置文件中的参数。若依然指定了参数,将以指定的参数为准。
+其中,`--device` 等参数无需指定,将使用配置文件中的参数。若依然指定了参数,将以指定的参数为准。
 
 运行后,得到的结果为:
 
 ```bash
-The prediction result is:
-['登机口于起飞前10分钟关闭']
-The prediction result is:
-['GATES CLOSE 1O MINUTESBEFORE DEPARTURE TIME']
-The prediction result is:
-['ETKT7813699238489/1']
-......
+{'input_path': '/root/.paddlex/predict_input/general_ocr_002.png', 'dt_polys': [array([[ 6, 13],
+       [64, 13],
+       [64, 31],
+       [ 6, 31]], dtype=int16), array([[210,  14],
+       [238,  14],
+        ......
+       [830, 445],
+       [830, 464],
+       [338, 473]], dtype=int16)], 'dt_scores': [0.7629529090100092, 0.7717284653547034, 0.7139251666762622, 0.8057611181556994, 0.8840947658872964, 0.793295938183885, 0.8342027855884783, 0.8081378522874861, 0.8436969344212185, 0.8500845646497226, 0.7932189714842249, 0.8875924621248228, 0.8827884273639948, 0.8322404317386042, 0.8614796803023563, 0.8804252994596097, 0.9069978945305474, 0.8383917914190059, 0.8495824076580516, 0.8825556800041383, 0.852788927706737, 0.8379584696974435, 0.8633519228646618, 0.763234473595298, 0.8602154244410916, 0.9206341882426813, 0.6341425973804049, 0.8490156149797171, 0.758314821564747, 0.8757849788793592, 0.772485060565334, 0.8404023012596349, 0.8190037953773427, 0.851908529295617, 0.6126112758079643, 0.7324388418218587], 'rec_text': ['www.9', '5', '登机牌', 'BOARDING', 'PASS', '舱位', '', 'CLASS', '序号SERIALNO', '座位号', 'SEAT NO', '航班 FLIGHT', '日期 DATE', '03DEC', 'W', '035', 'MU 2379', '始发地', 'FROM', '登机口', 'GATE', '登机时间BDT', '目的地TO', '福州', 'TAIYUAN', 'G11', 'FUZHOU', '身份识别IDNO', '姓名NAME', 'ZHANGQIWEI', '票号TKTNO', '张祺伟', '票价FARE', 'ETKT7813699238489/1', '登机口于起飞前10分钟关闭', 'GATES CLOSE 1O MINUTESBEFOREDEPARTURE TIME'], 'rec_score': [0.683099627494812, 0.23417049646377563, 0.9969978928565979, 0.9945957660675049, 0.9787729382514954, 0.9983421564102173, 0.0, 0.9896272420883179, 0.9927973747253418, 0.9976049065589905, 0.9330753684043884, 0.9562691450119019, 0.9312669038772583, 0.9749765396118164, 0.9749416708946228, 0.9988260865211487, 0.9319792985916138, 0.9979889988899231, 0.9956836700439453, 0.9991750717163086, 0.9938803315162659, 0.9982991218566895, 0.9701204299926758, 0.9986245632171631, 0.9888408780097961, 0.9793729782104492, 0.9952947497367859, 0.9945247173309326, 0.9919753670692444, 0.991995632648468, 0.9937331080436707, 0.9963390827178955, 0.9954304695129395, 0.9934715628623962, 0.9974429607391357, 0.9529641270637512]}
 ```
 可视化结果如下:
 
@@ -153,4 +158,4 @@ for res in output:
 | 通用表格识别       | [通用表格识别产线Python脚本使用说明](/docs_new/pipeline_usage/tutorials/ocr_pipelies/table_recognition.md) |
 | 通用时序预测       | [通用时序预测产线Python脚本使用说明](/docs_new/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md) |
 | 通用时序异常检测   | [通用时序异常检测产线Python脚本使用说明](/docs_new/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md) |
-| 通用时序分类       | [通用时序分类产线Python脚本使用说明](/docs_new/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md) |
+| 通用时序分类       | [通用时序分类产线Python脚本使用说明](/docs_new/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md) |

The file diff is not shown because it is too large
+ 9 - 5
docs/pipeline_usage/pipeline_develop_guide_en.md


Some files were not shown because too many files have changed