
refine practical docs (#2219)

zhangyubo0722 · 1 year ago
Parent
commit
3230785625

+ 4 - 4
docs/practical_tutorials/image_classification_garbage_tutorial.md

@@ -154,14 +154,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common.md).
 
 **Note:**
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default directory being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs:**  
+**Explanation of Training Outputs:**
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -246,10 +246,10 @@ for res in output:
     res.print() # Print the structured prediction output
     res.save_to_img("./output/") # Save the visualized result image
     res.save_to_json("./output/") # Save the structured prediction output
-```  
+```
 For more parameters, please refer to the [Image Classification Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/image_classification.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).
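The `-o Section.key=value` overrides documented in the notes above (e.g. `-o Global.mode=train`, `-o Train.epochs_iters=10`) all follow one dotted-path scheme. As a rough illustration of that scheme only — this is a hypothetical sketch, not PaddleX's actual configuration parser — such overrides can be folded into a nested config dict like this:

```python
def apply_overrides(config, overrides):
    """Fold '-o Section.key=value'-style overrides into a nested dict.

    Illustrative sketch only; PaddleX's real parser may differ
    (e.g. in type coercion and validation).
    """
    for item in overrides:
        key_path, _, value = item.partition("=")  # split at the first '='
        keys = key_path.split(".")
        node = config
        for key in keys[:-1]:                     # walk/create intermediate sections
            node = node.setdefault(key, {})
        node[keys[-1]] = value                    # values stay as raw strings here
    return config

# Mirrors the tutorial's example command line:
cfg = apply_overrides(
    {},
    ["Global.mode=train", "Global.device=gpu:0,1", "Train.epochs_iters=10"],
)
print(cfg["Global"]["device"])  # gpu:0,1
```

Note that splitting at the first `=` keeps values like `gpu:0,1` intact even though they contain punctuation.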

+ 3 - 3
docs/practical_tutorials/image_classification_garbage_tutorial_en.md

@@ -157,14 +157,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common_en.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common_en.md).
 
 **Note**:
 - The above parameters can be set by appending command line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first two GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default being `output`. If you need to specify a save path, you can use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs**:  
+**Explanation of Training Outputs**:
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -252,7 +252,7 @@ for res in output:
 ```
 For more parameters, please refer to the [General Image Classification Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/image_classification_en.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).

+ 4 - 4
docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial.md

@@ -151,14 +151,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common.md).
 
 **Note:**
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default directory being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs:**  
+**Explanation of Training Outputs:**
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -244,10 +244,10 @@ for res in output:
     res.print() # Print the structured prediction output
     res.save_to_img("./output/") # Save the visualized result image
     res.save_to_json("./output/") # Save the structured prediction output
-```  
+```
 For more parameters, please refer to the [Instance Segmentation Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).

+ 3 - 3
docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial_en.md

@@ -152,14 +152,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common_en.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common_en.md).
 
 **Note**:
 - The above parameters can be set by appending command line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default being `output`. If you need to specify a save path, you can use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced. During model inference, static graph weights are selected by default.
 
-**Explanation of Training Outputs**:  
+**Explanation of Training Outputs**:
 
 After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -252,7 +252,7 @@ for res in output:
 ```
 For more parameters, please refer to the [General Instance Segmentation Pipeline User Guide](../pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).

+ 5 - 5
docs/practical_tutorials/object_detection_fall_tutorial.md

@@ -93,7 +93,7 @@ python main.py -c paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml \
   "dataset_path": "./dataset/fall_det",
   "show_type": "image",
   "dataset_type": "COCODetDataset"
-}  
+}
 ```
 In the above verification results, `check_pass` being `True` indicates that the dataset format meets the requirements. Explanations of the other indicators are as follows:
 
@@ -153,14 +153,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common.md).
 
 **Note:**
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default directory being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs:**  
+**Explanation of Training Outputs:**
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -245,10 +245,10 @@ for res in output:
     res.print() # Print the structured prediction output
     res.save_to_img("./output/") # Save the visualized result image
     res.save_to_json("./output/") # Save the structured prediction output
-```  
+```
 For more parameters, please refer to the [Object Detection Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/object_detection.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).
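The dataset-check report shown in this file's first hunk is JSON containing fields such as `dataset_path` and `dataset_type`, and the surrounding prose gates on `check_pass` being `True`. A minimal sketch of consuming such a report — the loading helper here is hypothetical, with field names taken from the snippet above — might look like:

```python
import json

def dataset_check_passed(report_text):
    """Return (passed, dataset_type) from a PaddleX-style check report.

    Sketch only: assumes the report is a JSON object with the
    'check_pass' and 'dataset_type' fields shown above.
    """
    report = json.loads(report_text)
    return bool(report.get("check_pass", False)), report.get("dataset_type")

# Example mirroring the fields in the hunk above:
report_text = json.dumps({
    "check_pass": True,
    "dataset_path": "./dataset/fall_det",
    "show_type": "image",
    "dataset_type": "COCODetDataset",
})
passed, ds_type = dataset_check_passed(report_text)
print(passed, ds_type)  # True COCODetDataset
```

Treating a missing `check_pass` as a failure keeps the gate conservative.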

+ 4 - 4
docs/practical_tutorials/object_detection_fall_tutorial_en.md

@@ -93,7 +93,7 @@ After executing the above command, PaddleX will verify the dataset and count its
   "dataset_path": "./dataset/fall_det",
   "show_type": "image",
   "dataset_type": "COCODetDataset"
-}  
+}
 ```
 The above verification results indicate that the `check_pass` being `True` means the dataset format meets the requirements. Explanations for other indicators are as follows:
 
@@ -154,14 +154,14 @@ Each model in PaddleX provides a configuration file for model development, which
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common_en.md).
+For more hyperparameter introductions, refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common_en.md).
 
 **Note**:
 - The above parameters can be set by appending command-line parameters, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first two GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs**:  
+**Explanation of Training Outputs**:
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -250,7 +250,7 @@ for res in output:
 ```
 For more parameters, please refer to [General Object Detection Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/object_detection_en.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).

+ 5 - 5
docs/practical_tutorials/object_detection_fashion_pedia_tutorial.md

@@ -93,7 +93,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
   "dataset_path": "./dataset/det_mini_fashion_pedia_coco",
   "show_type": "image",
   "dataset_type": "COCODetDataset"
-}  
+}
 ```
 In the above verification results, `check_pass` being `True` indicates that the dataset format meets the requirements. Explanations of the other indicators are as follows:
 
@@ -153,14 +153,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common.md).
 
 **Note:**
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 50: `-o Train.epochs_iters=50`.
 - During model training, PaddleX automatically saves model weight files, with the default directory being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs:**  
+**Explanation of Training Outputs:**
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -246,10 +246,10 @@ for res in output:
     res.print() # Print the structured prediction output
     res.save_to_img("./output/") # Save the visualized result image
     res.save_to_json("./output/") # Save the structured prediction output
-```  
+```
 For more parameters, please refer to the [Object Detection Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/object_detection.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).

+ 4 - 4
docs/practical_tutorials/object_detection_fashion_pedia_tutorial_en.md

@@ -93,7 +93,7 @@ After executing the above command, PaddleX will verify the dataset and collect b
   "dataset_path": "./dataset/det_mini_fashion_pedia_coco",
   "show_type": "image",
   "dataset_type": "COCODetDataset"
-}  
+}
 ```
 The above verification results indicate that the dataset format meets the requirements as `check_pass` is True. The explanations for other indicators are as follows:
 
@@ -152,14 +152,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common_en.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common_en.md).
 
 **Note**:
 - The above parameters can be set by appending command line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first two GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 50: `-o Train.epochs_iters=50`.
 - During model training, PaddleX automatically saves model weight files, with the default being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Training Output Explanation**:  
+**Training Output Explanation**:
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -251,7 +251,7 @@ for res in output:
 
 For more parameters, please refer to [General Object Detection Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/object_detection_en.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
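The `-o Global.device=gpu:0,1` value used throughout these hunks packs a device type and card IDs into a single string. A hedged sketch of splitting it apart — illustrative only; PaddleX's own device handling may differ — could be:

```python
def parse_device(device):
    """Split a 'gpu:0,1'-style device string into (type, id list).

    Hypothetical helper, not PaddleX's actual parsing.
    """
    dev_type, _, ids = device.partition(":")          # 'cpu' has no ':' part
    card_ids = [int(i) for i in ids.split(",")] if ids else []
    return dev_type, card_ids

print(parse_device("gpu:0,1"))  # ('gpu', [0, 1])
print(parse_device("cpu"))      # ('cpu', [])
```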

+ 4 - 4
docs/practical_tutorials/ocr_det_license_tutorial.md

@@ -158,14 +158,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common.md).
 
 **Note:**
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default directory being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs:**  
+**Explanation of Training Outputs:**
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -249,10 +249,10 @@ for res in output:
     res.print() # Print the structured prediction output
     res.save_to_img("./output/") # Save the visualized result image
     res.save_to_json("./output/") # Save the structured prediction output
-```  
+```
 For more parameters, please refer to the [OCR Pipeline Usage Tutorial](../pipeline_usage/tutorials/ocr_pipelies/OCR.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).

+ 3 - 3
docs/practical_tutorials/ocr_det_license_tutorial_en.md

@@ -160,14 +160,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common_en.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common_en.md).
 
 **Note**:
 - The above parameters can be set by appending command line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced. During model inference, static graph weights are selected by default.
 
-**Training Output Explanation**:  
+**Training Output Explanation**:
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
@@ -254,7 +254,7 @@ for res in output:
 ```
 For more parameters, please refer to the [General OCR Pipeline Usage Tutorial](../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).

+ 4 - 4
docs/practical_tutorials/ocr_rec_chinese_tutorial.md

@@ -158,14 +158,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common.md).
 
 **Note:**
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default directory being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs:**  
+**Explanation of Training Outputs:**
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -251,10 +251,10 @@ for res in output:
     res.print() # Print the structured prediction output
     res.save_to_img("./output/") # Save the visualized result image
     res.save_to_json("./output/") # Save the structured prediction output
-```  
+```
 For more parameters, please refer to the [OCR Pipeline Usage Tutorial](../pipeline_usage/tutorials/ocr_pipelies/OCR.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).

+ 3 - 3
docs/practical_tutorials/ocr_rec_chinese_tutorial_en.md

@@ -161,14 +161,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training epochs;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common_en.md).
+For more hyperparameter introductions, refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common_en.md).
 
 **Note**:
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first two GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training epochs to 10: `-o Train.epochs_iters=10`.
 - During model training, PaddleX automatically saves model weight files, with the default being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Training Output Explanation**:  
+**Training Output Explanation**:
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -257,7 +257,7 @@ for res in output:
 ```
 For more parameters, please refer to the [General OCR Pipeline Usage Tutorial](../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).

+ 5 - 5
docs/practical_tutorials/semantic_segmentation_road_tutorial.md

@@ -89,7 +89,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
   "dataset_path": "./dataset/semantic-segmentation-makassaridn-road-dataset",
   "show_type": "image",
   "dataset_type": "COCODetDataset"
-}  
+}
 ```
 In the above verification results, `check_pass` being `True` indicates that the dataset format meets the requirements. Explanations of the other indicators are as follows:
 
@@ -149,14 +149,14 @@ Each model in PaddleX provides a configuration file for model development to set
     * `epochs_iters`: Number of training iterations;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, please refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common.md).
+For more hyperparameter introductions, please refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common.md).
 
 **Note:**
 - The above parameters can be set by appending command-line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first 2 GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training iterations to 5000: `-o Train.epochs_iters=5000`.
 - During model training, PaddleX automatically saves model weight files, with the default directory being `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 - PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 
-**Explanation of Training Outputs:**  
+**Explanation of Training Outputs:**
 
 After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
@@ -242,10 +242,10 @@ for res in output:
     res.print() # Print the structured prediction output
     res.save_to_img("./output/") # Save the visualized result image
     res.save_to_json("./output/") # Save the structured prediction output
-```  
+```
 For more parameters, please refer to the [Semantic Segmentation Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).

+ 3 - 3
docs/practical_tutorials/semantic_segmentation_road_tutorial_en.md

@@ -89,7 +89,7 @@ After executing the above command, PaddleX will verify the dataset and collect b
   "dataset_path": "./dataset/semantic-segmentation-makassaridn-road-dataset",
   "show_type": "image",
   "dataset_type": "COCODetDataset"
-}  
+}
 ```
 
 In the verification results above, `check_pass` being `True` indicates that the dataset format meets the requirements. Explanations for other indicators are as follows:
@@ -151,7 +151,7 @@ Each model in PaddleX provides a configuration file for model development, which
     * `epochs_iters`: Number of training iterations;
     * `learning_rate`: Training learning rate;
 
-For more hyperparameter introductions, refer to [PaddleX Hyperparameter Introduction](../module_usage/instructions/config_parameters_common_en.md).
+For more hyperparameter introductions, refer to [PaddleX General Model Configuration File Parameter Explanation](../module_usage/instructions/config_parameters_common_en.md).
 
 **Note**:
 - The above parameters can be set by appending command line arguments, e.g., specifying the mode as model training: `-o Global.mode=train`; specifying the first two GPUs for training: `-o Global.device=gpu:0,1`; setting the number of training iterations to 5000: `-o Train.epochs_iters=5000`.
@@ -248,7 +248,7 @@ for res in output:
 ```
 For more parameters, please refer to [General Semantic Segmentation Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md).
 
-2. Additionally, PaddleX also offers service-oriented deployment methods, detailed as follows:
+2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * High-Performance Deployment: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).