
fix doc for new API (#3284)

* fix doc for new API

* add DCU

* fix en doc for new API
Tingquan Gao, 9 months ago
parent commit 4a36f76df1

+ 19 - 0
docs/module_usage/instructions/config_parameters_common.en.md

@@ -5,6 +5,7 @@ comments: true
 # PaddleX Common Model Configuration File Parameter Explanation
 
 # Global
+
 <table>
 <thead>
 <tr>
@@ -47,7 +48,9 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # CheckDataset
+
 <table>
 <thead>
 <tr>
@@ -102,7 +105,9 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Train
+
 <table>
 <thead>
 <tr>
@@ -175,7 +180,9 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Evaluate
+
 <table>
 <thead>
 <tr>
@@ -200,7 +207,9 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Export
+
 <table>
 <thead>
 <tr>
@@ -219,7 +228,9 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Predict
+
 <table>
 <thead>
 <tr>
@@ -248,5 +259,13 @@ comments: true
 <td>Path to the prediction input</td>
 <td>The prediction input path specified in the YAML file</td>
 </tr>
+
+<tr>
+<td>kernel_option</td>
+<td>dict</td>
+<td>Inference engine settings, such as: "run_mode: paddle"</td>
+<td></td>
+</tr>
+
 </tbody>
 </table>
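
As a concrete reference, a `Predict` section using the new `kernel_option` field might look like the sketch below; apart from `input`, `kernel_option`, and the `run_mode: paddle` example from the table, the key names are illustrative assumptions rather than values taken from this diff:

```yaml
Predict:
  input: "path/to/demo.jpg"  # prediction input, as documented in the table above
  kernel_option:             # inference engine settings (dict)
    run_mode: paddle         # the "run_mode: paddle" example from the table
```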

+ 13 - 0
docs/module_usage/instructions/config_parameters_common.md

@@ -47,6 +47,7 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # CheckDataset
 <table>
 <thead>
@@ -102,6 +103,7 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Train
 <table>
 <thead>
@@ -175,6 +177,7 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Evaluate
 <table>
 <thead>
@@ -200,6 +203,7 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Export
 <table>
 <thead>
@@ -219,6 +223,7 @@ comments: true
 </tr>
 </tbody>
 </table>
+
 # Predict
 <table>
 <thead>
@@ -248,5 +253,13 @@ comments: true
 <td>预测输入路径</td>
 <td>yaml文件中指定的预测输入路径</td>
 </tr>
+
+<tr>
+<td>kernel_option</td>
+<td>dict</td>
+<td>推理引擎设置,如:“run_mode: paddle”</td>
+<td></td>
+</tr>
+
 </tbody>
 </table>

+ 24 - 13
docs/module_usage/instructions/model_python_API.en.md

@@ -7,11 +7,12 @@ comments: true
 Before using Python scripts for single model quick inference, please ensure you have completed the installation of PaddleX following the [PaddleX Local Installation Tutorial](../../installation/installation.en.md).
 
 ## I. Usage Example
+
 Taking the image classification model as an example, the usage is as follows:
 
 ```python
 from paddlex import create_model
-model = create_model("PP-LCNet_x1_0")
+model = create_model(model_name="PP-LCNet_x1_0")
 output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
 for res in output:
     res.print(json_format=False)
@@ -22,38 +23,47 @@ In short, just three steps:
 
 * Call the `create_model()` method to instantiate the prediction model object;
 * Call the `predict()` method of the prediction model object to perform inference prediction;
-* Call `print()`, `save_to_xxx()` and other related methods to visualize or save the prediction results.
+* Call `print()`, `save_to_xxx()` and other related methods to print or save the prediction results.
 
 ## II. API Description
 
 ### 1. Instantiate the Prediction Model Object by Calling the `create_model()` Method
+
 * `create_model`: Instantiate the prediction model object;
   * Parameters:
-    * `model_name`: `str` type, model name or local inference model file path, such as "PP-LCNet_x1_0", "/path/to/PP-LCNet_x1_0_infer/";
-    * `device`: `str` type, used to set the model inference device, such as "cpu", "gpu:2" for GPU settings;
-    * `pp_option`: `PaddlePredictorOption` type, used to set the model inference backend;
+    * `model_name`: `str` type, model name, such as "PP-LCNet_x1_0";
+    * `model_dir`: `str` type, local path to the directory of inference model files, such as "/path/to/PP-LCNet_x1_0_infer/"; defaults to `None`, which means the official model specified by `model_name` is used;
+    * `batch_size`: `int` type, defaults to `1`;
+    * `device`: `str` type, used to set the inference device, such as "cpu", or "gpu:2" for GPU card 2. By default, GPU 0 is used if available, otherwise the CPU;
+    * `pp_option`: `PaddlePredictorOption` type, used to set the inference engine. Please refer to [4-Inference Backend Configuration](#4-inference-backend-configuration) for more details;
+    * _`inference hyperparameters`_: used to set common inference hyperparameters. Please refer to the specific model documentation for details.
   * Return Value: `BasePredictor` type.
 
 ### 2. Perform Inference Prediction by Calling the `predict()` Method of the Prediction Model Object
+
 * `predict`: Use the defined prediction model to predict the input data;
   * Parameters:
     * `input`: Any type, supports str type representing the path of the file to be predicted, or a directory containing files to be predicted, or a network URL; for CV models, supports numpy.ndarray representing image data; for TS models, supports pandas.DataFrame type data; also supports list types composed of the above types;
-  * Return Value: `generator`, returns the prediction result of one sample per call;
+  * Return Value: `generator`; iterate with `for-in` or `next()`, and the prediction result of one sample is returned per access.
 
 ### 3. Visualize the Prediction Results
+
 The prediction results can be accessed, visualized, and saved through the corresponding attributes or methods, specifically as follows:
 
 #### Attributes:
+
 * `str`: Representation of the prediction result in `str` type;
   * Returns: A `str` type, the string representation of the prediction result.
 * `json`: The prediction result in JSON format;
   * Returns: A `dict` type.
-* `img`: The visualization image of the prediction result;
+* `img`: The visualization image of the prediction result. Available only when the results support visual representation;
   * Returns: A `PIL.Image` type.
-* `html`: The HTML representation of the prediction result;
+* `html`: The HTML representation of the prediction result. Available only when the results support representation in HTML format;
   * Returns: A `str` type.
+* _`more attrs`_: The prediction results of different models support different representation methods. Please refer to the specific model documentation for details.
 
 #### Methods:
+
 * `print()`: Outputs the prediction result. Note that when the prediction result is not convenient for direct output, relevant content will be omitted;
   * Parameters:
     * `json_format`: `bool` type, default is `False`, indicating that json formatting is not used;
@@ -66,19 +76,19 @@ The prediction results support to be accessed, visualized, and saved, which can
     * `indent`: `int` type, default is `4`, valid when `json_format` is `True`, indicating the indentation level for json formatting;
     * `ensure_ascii`: `bool` type, default is `False`, valid when `json_format` is `True`;
   * Return Value: None;
-* `save_to_img()`: Visualizes the prediction result and saves it as an image;
+* `save_to_img()`: Visualizes the prediction result and saves it as an image. Available only when the results support representation in the form of images;
   * Parameters:
     * `save_path`: `str` type, the path to save the result.
   * Returns: None.
-* `save_to_csv()`: Saves the prediction result as a CSV file;
+* `save_to_csv()`: Saves the prediction result as a CSV file. Available only when the results support representation in CSV format;
   * Parameters:
     * `save_path`: `str` type, the path to save the result.
   * Returns: None.
-* `save_to_html()`: Saves the prediction result as an HTML file;
+* `save_to_html()`: Saves the prediction result as an HTML file. Available only when the results support representation in HTML format;
   * Parameters:
     * `save_path`: `str` type, the path to save the result.
   * Returns: None.
-* `save_to_xlsx()`: Saves the prediction result as an XLSX file;
+* `save_to_xlsx()`: Saves the prediction result as an XLSX file. Available only when the results support representation in XLSX format;
   * Parameters:
     * `save_path`: `str` type, the path to save the result.
   * Returns: None.
@@ -90,7 +100,7 @@ PaddleX supports configuring the inference backend through `PaddlePredictorOptio
 #### Attributes:
 
 * `device`: Inference device;
-  * Supports setting the device type and card number represented by `str`. Device types include 'gpu', 'cpu', 'npu', 'xpu', 'mlu'. When using an accelerator card, you can specify the card number, e.g., 'gpu:0' for GPU 0. The default is 'gpu:0';
+  * Supports setting the device type and card number represented by `str`. Device types include 'gpu', 'cpu', 'npu', 'xpu', 'mlu', 'dcu'. When using an accelerator card, you can specify the card number, e.g., 'gpu:0' for GPU 0. By default, GPU 0 is used if available, otherwise the CPU;
   * Return value: `str` type, the currently set inference device.
 * `run_mode`: Inference backend;
   * Supports setting the inference backend as a `str` type, options include 'paddle', 'trt_fp32', 'trt_fp16', 'trt_int8', 'mkldnn', 'mkldnn_bf16'. 'mkldnn' is only selectable when the inference device is 'cpu'. The default is 'paddle';
@@ -100,6 +110,7 @@ PaddleX supports configuring the inference backend through `PaddlePredictorOptio
   * Return value: `int` type, the currently set number of threads for the acceleration library.
 
 #### Methods:
+
 * `get_support_run_mode`: Get supported inference backend configurations;
   * Parameters: None;
   * Return value: List type, the available inference backend configurations.
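
Putting the updated `create_model()` signature and `PaddlePredictorOption` together, a minimal end-to-end sketch might read as follows; the `PaddlePredictorOption` import path and the input path are assumptions, not taken from this diff:

```python
from paddlex import create_model
from paddlex.inference import PaddlePredictorOption  # import path assumed; adjust to your PaddleX version

# Configure the inference engine explicitly instead of relying on defaults.
opt = PaddlePredictorOption()
opt.device = "cpu"
opt.run_mode = "mkldnn"  # 'mkldnn' is only selectable when the device is 'cpu'

model = create_model(
    model_name="PP-LCNet_x1_0",
    model_dir=None,  # None -> use the official model specified by model_name
    batch_size=1,
    pp_option=opt,
)

output = model.predict("demo.jpg")  # placeholder input path
res = next(output)                  # predict() returns a generator: one result per sample
res.print(json_format=True, indent=4)
```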

+ 33 - 15
docs/module_usage/instructions/model_python_API.md

@@ -7,48 +7,64 @@ comments: true
 在使用Python脚本进行单模型快速推理前,请确保您已经按照[PaddleX本地安装教程](../../installation/installation.md)完成了PaddleX的安装。
 
 ## 一、使用示例
+
 以图像分类模型为例,使用方式如下:
 
 ```python
 from paddlex import create_model
-model = create_model("PP-LCNet_x1_0")
+model = create_model(model_name="PP-LCNet_x1_0")
 output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
 for res in output:
     res.print(json_format=False)
     res.save_to_img("./output/")
     res.save_to_json("./output/res.json")
 ```
+
 简单来说,只需三步:
 
 * 调用`create_model()`方法实例化预测模型对象;
 * 调用预测模型对象的`predict()`方法进行推理预测;
-* 调用`print()`、`save_to_xxx()`等相关方法对预测结果进行可视化或是保存。
+* 调用`print()`、`save_to_xxx()`等相关方法对预测结果进行打印输出或是保存。
 
 ## 二、API说明
+
 ### 1. 调用`create_model()`方法实例化预测模型对象
+
 * `create_model`:实例化预测模型对象;
   * 参数:
-    * `model_name`:`str` 类型,模型名或是本地inference模型文件路径,如“PP-LCNet_x1_0”、“/path/to/PP-LCNet_x1_0_infer/”;
-    * `device`:`str` 类型,用于设置模型推理设备,如为GPU设置则可以指定卡号,如“cpu”、“gpu:2”;
-    * `pp_option`:`PaddlePredictorOption` 类型,用于设置模型推理后端;
-  * 返回值:`BasePredictor`类型。
+    * `model_name`:`str` 类型,模型名,如“PP-LCNet_x1_0”;
+    * `model_dir`:`str` 类型,本地 inference 模型文件目录路径,如“/path/to/PP-LCNet_x1_0_infer/”,默认为 `None`,表示使用`model_name`指定的官方推理模型;
+    * `batch_size`:`int` 类型,默认为 `1`;
+    * `device`:`str` 类型,用于设置模型推理设备,如为GPU设置则可以指定卡号,如“cpu”、“gpu:2”,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU;
+    * `pp_option`:`PaddlePredictorOption` 类型,用于设置模型推理后端,关于推理后端的详细说明,请参考下文[4-推理后端设置](#4-推理后端设置);
+    * _`推理超参数`_:支持常见推理超参数的修改,具体参数说明详见具体模型文档;
+  * 返回值:`BasePredictor` 类型。
+
 ### 2. 调用预测模型对象的`predict()`方法进行推理预测
+
 * `predict`:使用定义的预测模型,对输入数据进行预测;
   * 参数:
     * `input`:任意类型,支持str类型表示的待预测数据文件路径,或是包含待预测文件的目录,或是网络URL;对于CV模型,支持numpy.ndarray表示的图像数据;对于TS模型,支持pandas.DataFrame类型数据;同样支持上述类型所构成的list类型;
-  * 返回值:`generator`,每次调用返回一个样本的预测结果;
+  * 返回值:`generator`,需通过`for-in`或`next()`方式进行遍历,每次访问返回一个样本的预测结果;
+
 ### 3. 对预测结果进行可视化
-模型的预测结果支持访问、可视化及保存,可通过相应的属性或方法实现,具体如下:
+
+模型的预测结果支持直接访问与保存等操作,可通过相应的属性或方法实现,具体如下:
+
 #### 属性:
+
 * `str`:`str` 类型表示的预测结果;
   * 返回值:`str` 类型,预测结果的str表示;
 * `json`:json格式表示的预测结果;
   * 返回值:`dict` 类型;
-* `img`:预测结果的可视化图;
+* `img`:预测结果的可视化图,仅当该模型预测结果支持可视化表示时可用
   * 返回值:`PIL.Image` 类型;
-* `html`:预测结果的HTML表示;
+* `html`:预测结果的HTML表示,仅当该模型预测结果支持以HTML形式表示时可用
   * 返回值:`str` 类型;
+* _`更多`_:不同模型的预测结果支持不同的表示方式,更多属性请参考具体模型文档;
+
 #### 方法:
+
 * `print()`:将预测结果输出,需要注意,当预测结果不便于直接输出时,会省略相关内容;
   * 参数:
     * `json_format`:`bool`类型,默认为`False`,表示不使用json格式化输出;
@@ -61,22 +77,23 @@ for res in output:
     * `indent`:`int`类型,默认为`4`,当`json_format`为`True`时有效,表示json格式化的类型;
     * `ensure_ascii`:`bool`类型,默认为`False`,当`json_format`为`True`时有效;
   * 返回值:无;
-* `save_to_img()`:将预测结果可视化并保存为图像;
+* `save_to_img()`:将预测结果可视化并保存为图像,仅当该模型预测结果支持以图像形式表示时可用
   * 参数:
     * `save_path`:`str`类型,结果保存的路径;
   * 返回值:无;
-* `save_to_csv()`:将预测结果保存为CSV文件;
+* `save_to_csv()`:将预测结果保存为CSV文件,仅当该模型预测结果支持以CSV形式表示时可用
   * 参数:
     * `save_path`:`str`类型,结果保存的路径;
   * 返回值:无;
-* `save_to_html()`:将预测结果保存为HTML文件;
+* `save_to_html()`:将预测结果保存为HTML文件,仅当该模型预测结果支持以HTML形式表示时可用
   * 参数:
     * `save_path`:`str`类型,结果保存的路径;
   * 返回值:无;
-* `save_to_xlsx()`:将预测结果保存为XLSX文件;
+* `save_to_xlsx()`:将预测结果保存为XLSX文件,仅当该模型预测结果支持以XLSX形式表示时可用
   * 参数:
     * `save_path`:`str`类型,结果保存的路径;
   * 返回值:无;
+* _`更多`_:不同模型的预测结果支持不同的存储方式,更多方法请参考具体模型文档;
 
 ### 4. 推理后端设置
 
@@ -85,7 +102,7 @@ PaddleX 支持通过`PaddlePredictorOption`设置推理后端,相关API如下
 #### 属性:
 
 * `device`:推理设备;
-  * 支持设置 `str` 类型表示的推理设备类型及卡号,设备类型支持可选 'gpu', 'cpu', 'npu', 'xpu', 'mlu',当使用加速卡时,支持指定卡号,如使用 0 号 gpu:'gpu:0',默认为 'gpu:0'
+  * 支持设置 `str` 类型表示的推理设备类型及卡号,设备类型支持可选 “gpu”、“cpu”、“npu”、“xpu”、“mlu”、“dcu”,当使用加速卡时,支持指定卡号,如使用 0 号 GPU:`gpu:0`,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU
   * 返回值:`str`类型,当前设置的推理设备。
 * `run_mode`:推理后端;
   * 支持设置 `str` 类型的推理后端,支持可选 'paddle','trt_fp32','trt_fp16','trt_int8','mkldnn','mkldnn_bf16',其中 'mkldnn' 仅当推理设备使用 cpu 时可选,默认为 'paddle';
@@ -95,6 +112,7 @@ PaddleX 支持通过`PaddlePredictorOption`设置推理后端,相关API如下
   * 返回值:`int` 类型,当前设置的加速库计算线程数。
 
 #### 方法:
+
 * `get_support_run_mode`:获取支持的推理后端设置;
   * 参数:无;
   * 返回值:list 类型,可选的推理后端设置。
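
For the result attributes and `save_to_xxx()` methods listed in both language versions above, a short usage sketch (with a placeholder input path) could be:

```python
from paddlex import create_model

model = create_model(model_name="PP-LCNet_x1_0")
for res in model.predict("demo.jpg"):     # placeholder input path
    text = res.str                        # string representation of the result
    data = res.json                       # dict representation
    res.save_to_img("./output/")          # only when the result supports image form
    res.save_to_json("./output/res.json")
```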

+ 16 - 12
docs/pipeline_usage/instructions/pipeline_CLI_usage.en.md

@@ -16,18 +16,20 @@ Taking the image classification pipeline as an example, the usage is as follows:
 paddlex --pipeline image_classification \
         --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
         --device gpu:0 \
-        --save_path ./output/
+        --save_path ./output/ \
+        --topk 5
 ```
 This single step completes the inference prediction and saves the prediction results. Explanations for the relevant parameters are as follows:
 
 * `pipeline`: The name of the pipeline or the local path to the pipeline configuration file, such as the pipeline name "image_classification", or the path to the pipeline configuration file "path/to/image_classification.yaml";
 * `input`: The path to the data file to be predicted, supporting local file paths, local directories containing data files to be predicted, and file URL links;
-* `device`: Used to set the model inference device. If set for GPU, you can specify the card number, such as "cpu", "gpu:2". When not specified, if GPU is available, it will be used; otherwise, CPU will be used;
-* `save_path`: The save path for prediction results. When not specified, the prediction results will not be saved;
+* `device`: Used to set the inference device, such as "cpu", or "gpu:2" to specify a GPU card number. By default, GPU 0 is used if available, otherwise the CPU;
+* `save_path`: The save path for prediction results. By default, the prediction results will not be saved;
+* _`inference hyperparameters`_: Different pipelines support different inference hyperparameter settings, which take precedence over the pipeline's default configuration. For example, the image classification pipeline supports the `topk` parameter. Please refer to the specific pipeline documentation for details.
 
 ### 2. Custom Pipeline Configuration
 
-If you need to modify the pipeline configuration, you can retrieve the configuration file and modify it. Still taking the image classification pipeline as an example, the way to retrieve the configuration file is as follows:
+If you need to modify the pipeline, you can get the configuration file and modify it. Still taking the image classification pipeline as an example, the way to retrieve the configuration file is as follows:
 
 ```bash
 paddlex --get_pipeline_config image_classification
@@ -41,14 +43,16 @@ paddlex --get_pipeline_config image_classification
 Then modify the pipeline configuration file `configs/image_classification.yaml`; for example, the content of the image classification configuration file is:
 
 ```yaml
-Global:
-  pipeline_name: image_classification
-  input: https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg
-
-Pipeline:
-  model: PP-LCNet_x0_5
-  batch_size: 1
-  device: "gpu:0"
+pipeline_name: image_classification
+
+SubModules:
+  ImageClassification:
+    module_name: image_classification
+    model_name: PP-LCNet_x0_5
+    model_dir: null
+    batch_size: 4
+    device: "gpu:0"
+    topk: 5
 ```
 
 Once the modification is completed, you can use this configuration file to perform model pipeline inference prediction as follows:
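
The hunk ends before the command itself; a plausible form, reusing the invocation from section 1 with the pipeline name replaced by the local configuration file path, would be:

```bash
paddlex --pipeline ./configs/image_classification.yaml \
        --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
        --device gpu:0 \
        --save_path ./output/
```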

+ 21 - 16
docs/pipeline_usage/instructions/pipeline_CLI_usage.md

@@ -2,9 +2,9 @@
 comments: true
 ---
 
-# PaddleX模型产线CLI命令行使用说明
+# PaddleX模型产线CLI命令行使用说明
 
-在使用CLI命令行进行模型产线快速推理前,请确保您已经按照[PaddleX本地安装教程](../../installation/installation.md)完成了PaddleX的安装。
+在使用CLI命令行进行模型产线快速推理前,请确保您已经按照[PaddleX本地安装教程](../../installation/installation.md)完成了PaddleX的安装。
 
 ## 一、使用示例
 
@@ -16,18 +16,21 @@ comments: true
 paddlex --pipeline image_classification \
         --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
         --device gpu:0 \
-        --save_path ./output/
+        --save_path ./output/ \
+        --topk 5
 ```
+
 只需一步就能完成推理预测并保存预测结果,相关参数说明如下:
 
-* `pipeline`:模型产线名称或是模型产线配置文件的本地路径,如模型产线名“image_classification”,或模型产线配置文件路径“path/to/image_classification.yaml”;
+* `pipeline`:模型产线名称或是模型产线配置文件的本地路径,如模型产线名 “image_classification”,或模型产线配置文件路径 “path/to/image_classification.yaml”;
 * `input`:待预测数据文件路径,支持本地文件路径、包含待预测数据文件的本地目录、文件URL链接;
-* `device`:用于设置模型推理设备,如为GPU设置则可以指定卡号,如“cpu”、“gpu:2”,当不传入时,如有GPU设置则使用GPU,否则使用CPU;
-* `save_path`:预测结果的保存路径,当不传入时,则不保存预测结果;
+* `device`:用于设置模型推理设备,如为 GPU 则可以指定卡号,如 “cpu”、“gpu:2”,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU;
+* `save_path`:预测结果的保存路径,默认情况下,不保存预测结果;
+* _`推理超参数`_:不同产线根据具体情况提供了不同的推理超参数设置,该参数优先级大于产线配置文件。对于图像分类产线,则支持通过 `topk` 参数设置输出的前 k 个预测结果。其他产线请参考对应的产线说明文档;
 
 ### 2. 自定义产线配置
 
-如需对产线配置进行修改,可获取配置文件后进行修改,仍以图像分类产线为例,获取配置文件方式如下:
+如需对产线进行修改,可获取产线配置文件后进行修改,仍以图像分类产线为例,获取配置文件方式如下:
 
 ```bash
 paddlex --get_pipeline_config image_classification
@@ -38,17 +41,19 @@ paddlex --get_pipeline_config image_classification
 # The pipeline config has been saved to: configs/image_classification.yaml
 ```
 
-然后可修改产线配置文件`configs/image_classification.yaml`,如图像分类配置文件内容为:
+然后可修改产线配置文件 `configs/image_classification.yaml`,如图像分类配置文件内容为:
 
 ```yaml
-Global:
-  pipeline_name: image_classification
-  input: https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg
-
-Pipeline:
-  model: PP-LCNet_x0_5
-  batch_size: 1
-  device: "gpu:0"
+pipeline_name: image_classification
+
+SubModules:
+  ImageClassification:
+    module_name: image_classification
+    model_name: PP-LCNet_x0_5
+    model_dir: null
+    batch_size: 4
+    device: "gpu:0"
+    topk: 5
 ```
 
 在修改完成后,即可使用该配置文件进行模型产线推理预测,方式如下:

+ 15 - 19
docs/pipeline_usage/instructions/pipeline_python_API.en.md

@@ -7,43 +7,48 @@ comments: true
 Before using Python scripts for rapid inference on model pipelines, please ensure you have installed PaddleX following the [PaddleX Local Installation Guide](../../installation/installation.en.md).
 
 ## I. Usage Example
+
 Taking the image classification pipeline as an example, the usage is as follows:
 
 ```python
 from paddlex import create_pipeline
 pipeline = create_pipeline("image_classification")
-output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
+output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1, topk=5)
 for res in output:
     res.print(json_format=False)
     res.save_to_img("./output/")
     res.save_to_json("./output/res.json")
 ```
+
 In short, there are only three steps:
 
 * Call the `create_pipeline()` method to instantiate the prediction model pipeline object;
 * Call the `predict()` method of the prediction model pipeline object for inference;
-* Call `print()`, `save_to_xxx()` and other related methods to visualize or save the prediction results.
+* Call `print()`, `save_to_xxx()` and other related methods to print or save the prediction results.
 
 ## II. API Description
 
 ### 1. Instantiate the Prediction Model Pipeline Object by Calling `create_pipeline()`
 * `create_pipeline`: Instantiates the prediction model pipeline object;
   * Parameters:
-    * `pipeline_name`: `str` type, the pipeline name or the local pipeline configuration file path, such as "image_classification", "/path/to/image_classification.yaml";
-    * `device`: `str` type, used to set the model inference device, such as "cpu" or "gpu:2" for GPU settings;
-    * `pp_option`: `PaddlePredictorOption` type, used to set the model inference backend;
+    * `pipeline`: `str` type, the pipeline name or the local pipeline configuration file path, such as "image_classification", "/path/to/image_classification.yaml";
+    * `device`: `str` type, used to set the inference device, such as "cpu", or "gpu:2" to specify a GPU card number. By default, GPU 0 is used if available, otherwise the CPU;
+    * `pp_option`: `PaddlePredictorOption` type, used to set the inference engine. Please refer to [4-Inference Backend Configuration](#4-inference-backend-configuration) for more details;
   * Return Value: `BasePredictor` type.
 
 ### 2. Perform Inference by Calling the `predict()` Method of the Prediction Model Pipeline Object
+
 * `predict`: Uses the defined prediction model pipeline to predict input data;
   * Parameters:
     * `input`: Any type, supporting str representing the path of the file to be predicted, or a directory containing files to be predicted, or a network URL; for CV tasks, supports numpy.ndarray representing image data; for TS tasks, supports pandas.DataFrame type data; also supports lists of the above types;
   * Return Value: `generator`, returns the prediction result of one sample per call;
 
 ### 3. Visualize the Prediction Results
-The prediction results of the model pipeline support access, visualization, and saving, which can be achieved through corresponding attributes or methods, specifically as follows:
+
+The prediction results of the pipeline can be accessed and saved through the corresponding attributes or methods, specifically as follows:
 
 #### Attributes:
+
 * `str`: `str` type representation of the prediction result;
   * Return Value: `str` type, string representation of the prediction result;
 * `json`: Prediction result in JSON format;
@@ -52,21 +57,10 @@ The prediction results of the model pipeline support access, visualization, and
   * Return Value: `PIL.Image` type;
 * `html`: HTML representation of the prediction result;
   * Return Value: `str` type;
-
-### 3. Visualize the Prediction Results
-The prediction results support to be accessed, visualized, and saved, which can be achieved through corresponding attributes or methods, specifically as follows:
-
-#### Attributes:
-* `str`: Representation of the prediction result in `str` type;
-  * Returns: A `str` type, the string representation of the prediction result.
-* `json`: The prediction result in JSON format;
-  * Returns: A `dict` type.
-* `img`: The visualization image of the prediction result;
-  * Returns: A `PIL.Image` type.
-* `html`: The HTML representation of the prediction result;
-  * Returns: A `str` type.
+* _`more attrs`_: The prediction results of different pipelines support different representation methods. Please refer to the specific pipeline documentation for details.
 
 #### Methods:
+
 * `print()`: Outputs the prediction result. Note that when the prediction result is not convenient for direct output, relevant content will be omitted;
   * Parameters:
     * `json_format`: `bool` type, default is `False`, indicating that json formatting is not used;
@@ -95,6 +89,7 @@ The prediction results support to be accessed, visualized, and saved, which can
   * Parameters:
     * `save_path`: `str` type, the path to save the result.
   * Returns: None.
+* _`more funcs`_: The prediction results of different pipelines support different saving methods. Please refer to the specific pipeline documentation for details.
 
 ### 4. Inference Backend Configuration
 
@@ -113,6 +108,7 @@ PaddleX supports configuring the inference backend through `PaddlePredictorOptio
   * Return value: `int` type, the currently set number of threads for the acceleration library.
 
 #### Methods:
+
 * `get_support_run_mode`: Get supported inference backend configurations;
   * Parameters: None;
   * Return value: List type, the available inference backend configurations.
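
Combining the renamed `pipeline` parameter with the pass-through inference hyperparameters, a minimal sketch (placeholder input path; `topk` applies to the image classification pipeline) might be:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="image_classification", device="gpu:0")
output = pipeline.predict("demo.jpg", batch_size=1, topk=5)  # placeholder input path
for res in output:
    res.print(json_format=False)
    res.save_to_json("./output/res.json")
```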

+ 19 - 6
docs/pipeline_usage/instructions/pipeline_python_API.md

@@ -4,7 +4,7 @@ comments: true
 
 # PaddleX模型产线Python脚本使用说明
 
-在使用Python脚本进行模型产线快速推理前,请确保您已经按照[PaddleX本地安装教程](../../installation/installation.md)完成了PaddleX的安装。
+在使用 Python 脚本进行模型产线快速推理前,请确保您已经按照 [PaddleX 本地安装教程](../../installation/installation.md)完成了 PaddleX 的安装。
 
 ## 一、使用示例
 
@@ -13,35 +13,43 @@ comments: true
 ```python
 from paddlex import create_pipeline
 pipeline = create_pipeline("image_classification")
-output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1)
+output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg", batch_size=1, topk=5)
 for res in output:
     res.print(json_format=False)
     res.save_to_img("./output/")
     res.save_to_json("./output/res.json")
 ```
+
 简单来说,只需三步:
 
 * 调用`create_pipeline()`方法实例化预测模型产线对象;
 * 调用预测模型产线对象的`predict()`方法进行推理预测;
-* 调用`print()`、`save_to_xxx()`等相关方法对预测结果进行可视化或是保存。
+* 调用`print()`、`save_to_xxx()`等相关方法对预测结果进行打印输出或是保存。
 
 ## 二、API说明
 
 ### 1. 调用`create_pipeline()`方法实例化预测模型产线对象
+
 * `create_pipeline`:实例化预测模型产线对象;
   * 参数:
-    * `pipeline_name`:`str` 类型,产线名或是本地产线配置文件路径,如“image_classification”、“/path/to/image_classification.yaml”;
-    * `device`:`str` 类型,用于设置模型推理设备,如为GPU设置则可以指定卡号,如“cpu”、“gpu:2”;
-    * `pp_option`:`PaddlePredictorOption` 类型,用于设置模型推理后端;
+    * `pipeline`:`str` 类型,产线名或是本地产线配置文件路径,如“image_classification”、“/path/to/image_classification.yaml”;
+    * `device`:`str` 类型,用于设置模型推理设备,如为 GPU 则可以指定卡号,如“cpu”、“gpu:2”,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU
+    * `pp_option`:`PaddlePredictorOption` 类型,用于设置模型推理后端,关于推理后端的详细说明,请参考下文[4-推理后端设置](#4-推理后端设置)
   * 返回值:`BasePredictor`类型。
+
 ### 2. 调用预测模型产线对象的`predict()`方法进行推理预测
+
 * `predict`:使用定义的预测模型产线,对输入数据进行预测;
   * 参数:
     * `input`:任意类型,支持str类型表示的待预测数据文件路径,或是包含待预测文件的目录,或是网络URL;对于CV任务,支持numpy.ndarray表示的图像数据;对于TS任务,支持pandas.DataFrame类型数据;同样支持上述类型所构成的list类型;
   * 返回值:`generator`,每次调用返回一个样本的预测结果;
+
 ### 3. 对预测结果进行可视化
+
 模型产线的预测结果支持访问、可视化及保存,可通过相应的属性或方法实现,具体如下:
+
 #### 属性:
+
 * `str`:`str` 类型表示的预测结果;
   * 返回值:`str` 类型,预测结果的str表示;
 * `json`:json格式表示的预测结果;
@@ -50,7 +58,10 @@ for res in output:
   * 返回值:`PIL.Image` 类型;
 * `html`:预测结果的HTML表示;
   * 返回值:`str` 类型;
+* _`更多`_:不同产线的预测结果支持不同的表示方式,更多属性请参考具体产线文档;
+
 #### 方法:
+
 * `print()`:将预测结果输出,需要注意,当预测结果不便于直接输出时,会省略相关内容;
   * 参数:
     * `json_format`:`bool`类型,默认为`False`,表示不使用json格式化输出;
@@ -79,6 +90,7 @@ for res in output:
   * 参数:
     * `save_path`:`str`类型,结果保存的路径;
   * 返回值:无;
+* _`更多`_:不同产线的预测结果支持不同的存储方式,更多方法请参考具体产线文档;
 
 ### 4. 推理后端设置
 
@@ -97,6 +109,7 @@ PaddleX 支持通过`PaddlePredictorOption`设置推理后端,相关API如下
   * 返回值:`int` 类型,当前设置的加速库计算线程数。
 
 #### 方法:
+
 * `get_support_run_mode`:获取支持的推理后端设置;
   * 参数:无;
   * 返回值:list 类型,可选的推理后端设置。
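
Finally, the `PaddlePredictorOption` attributes and query methods documented in both language versions can be exercised as in the sketch below; the import path is an assumption:

```python
from paddlex.inference import PaddlePredictorOption  # import path assumed; adjust to your PaddleX version

opt = PaddlePredictorOption()
print(opt.get_support_run_mode())  # list of selectable inference backends

opt.run_mode = "trt_fp16"          # choose a backend from the supported list
opt.device = "gpu:0"               # device type and card number
print(opt.run_mode, opt.device)    # currently set backend and device
```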