
Fix parallel inference docs (#4052)

Lin Manhui committed 6 months ago · commit 63e6e276f7
39 changed files with 39 additions and 39 deletions
  1. docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.en.md (+1 −1)
  2. docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.md (+1 −1)
  3. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.en.md (+1 −1)
  4. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md (+1 −1)
  5. docs/pipeline_usage/tutorials/cv_pipelines/image_classification.en.md (+1 −1)
  6. docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md (+1 −1)
  7. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.en.md (+1 −1)
  8. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md (+1 −1)
  9. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.en.md (+1 −1)
  10. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md (+1 −1)
  11. docs/pipeline_usage/tutorials/cv_pipelines/object_detection.en.md (+1 −1)
  12. docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md (+1 −1)
  13. docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.en.md (+1 −1)
  14. docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.md (+1 −1)
  15. docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.en.md (+1 −1)
  16. docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.md (+1 −1)
  17. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.en.md (+1 −1)
  18. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md (+1 −1)
  19. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.en.md (+1 −1)
  20. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md (+1 −1)
  21. docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.en.md (+1 −1)
  22. docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.md (+1 −1)
  23. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.en.md (+1 −1)
  24. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md (+1 −1)
  25. docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.en.md (+1 −1)
  26. docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.md (+1 −1)
  27. docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.en.md (+1 −1)
  28. docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.md (+1 −1)
  29. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.en.md (+1 −1)
  30. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md (+1 −1)
  31. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.en.md (+1 −1)
  32. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md (+1 −1)
  33. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.en.md (+1 −1)
  34. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md (+1 −1)
  35. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.en.md (+1 −1)
  36. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md (+1 −1)
  37. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.en.md (+1 −1)
  38. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.md (+1 −1)
  39. mkdocs.yml (+1 −1)
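Every docs change below replaces a hard-coded link anchor with a plain reference to the pipeline parallel inference documentation; the underlying feature is the multi-device `device` string (e.g. `gpu:0,1`) that the parameter tables describe. As an illustrative sketch only (this helper is hypothetical, not PaddleX source code), splitting such a string into one device identifier per parallel worker might look like:

```python
# Hypothetical helper: expand a PaddleX-style multi-device string such as
# "gpu:0,1,2" into one device identifier per parallel worker.
# Illustrates the "specify multiple devices" syntax the docs reference;
# it is NOT actual PaddleX code.
def expand_device_string(device: str) -> list[str]:
    if ":" not in device:  # e.g. "cpu": a single device with no card ids
        return [device]
    dev_type, ids = device.split(":", 1)
    return [f"{dev_type}:{i.strip()}" for i in ids.split(",") if i.strip()]

print(expand_device_string("gpu:0,1"))  # ['gpu:0', 'gpu:1']
print(expand_device_string("npu:0"))    # ['npu:0']
print(expand_device_string("cpu"))      # ['cpu']
```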

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.en.md

@@ -152,7 +152,7 @@ paddlex --pipeline human_keypoint_detection \
         --device gpu:0
 ```
 
-The relevant parameter descriptions and results explanations can be referred to in the parameter explanations and results explanations of [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions and results explanations can be referred to in the parameter explanations and results explanations of [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 The visualization results are saved to `save_path`, as shown below:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.md

@@ -197,7 +197,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.en.md

@@ -86,7 +86,7 @@ Note: Due to network issues, the above URL could not be successfully parsed. If
 paddlex --pipeline anomaly_detection --input uad_grid.png --device gpu:0  --save_path ./output
 ```
 
-The relevant parameter descriptions can be found in the [2.1.2 Python Script Integration](#212-python脚本方式集成) section. Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be found in the [2.1.2 Python Script Integration](#212-python脚本方式集成) section. Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the results will be printed to the terminal as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md

@@ -146,7 +146,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_classification.en.md

@@ -751,7 +751,7 @@ You can quickly experience the image classification pipeline with a single comma
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device gpu:0 --save_path ./output/
 ```
 
-The relevant parameter descriptions can be found in the parameter explanation section of [2.2.2 Python Script Integration](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be found in the parameter explanation section of [2.2.2 Python Script Integration](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 ```bash
 {'res': {'input_path': 'general_image_classification_001.jpg', 'page_index': None, 'class_ids': array([296, 170, 356, 258, 248], dtype=int32), 'scores': array([0.62736, 0.03752, 0.03256, 0.0323 , 0.03194], dtype=float32), 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}}

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md

@@ -805,7 +805,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.en.md

@@ -116,7 +116,7 @@ You can quickly experience the image multi-label classification pipeline effect
 paddlex --pipeline image_multilabel_classification --input general_image_classification_001.jpg --device gpu:0
 ```
 
-The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the result will be printed to the terminal as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md

@@ -177,7 +177,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.en.md

@@ -229,7 +229,7 @@ paddlex --pipeline instance_segmentation \
         --device gpu:0
 ```
 
-The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the result will be printed to the terminal as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md

@@ -287,7 +287,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>None</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.en.md

@@ -445,7 +445,7 @@ paddlex --pipeline object_detection \
         --device gpu:0
 ```
 
-For the description of parameters and interpretation of results, please refer to the parameter explanation and result interpretation in [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+For the description of parameters and interpretation of results, please refer to the parameter explanation and result interpretation in [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 The visualization results are saved to `save_path`, as shown below:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md

@@ -497,7 +497,7 @@ for res in output:
 <td><code>None</code></td>
 </tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.en.md

@@ -134,7 +134,7 @@ You can quickly experience the pedestrian attribute recognition pipeline with a
 paddlex --pipeline pedestrian_attribute_recognition --input pedestrian_attribute_002.jpg --device gpu:0 --save_path ./output/
 ```
 
-The relevant parameter descriptions can be found in the parameter explanation section of [2.2.2 Python Script Integration](#222-python脚本方式集成). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be found in the parameter explanation section of [2.2.2 Python Script Integration](#222-python脚本方式集成). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the result will be printed to the terminal, as shown below:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.md

@@ -192,7 +192,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.en.md

@@ -95,7 +95,7 @@ paddlex --pipeline rotated_object_detection \
         --device gpu:0 \
 ```
 
-The relevant parameter descriptions can be referred to in the parameter explanations of [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be referred to in the parameter explanations of [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the results will be printed to the terminal, as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.md

@@ -148,7 +148,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>None</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.en.md

@@ -262,7 +262,7 @@ paddlex --pipeline semantic_segmentation \
         --device gpu:0 \
 ```
 
-The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the result will be printed to the terminal, as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md

@@ -321,7 +321,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>None</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.en.md

@@ -113,7 +113,7 @@ paddlex --pipeline small_object_detection \
         --device gpu:0
 ```
 
-The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be referred to in the parameter explanations in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the result will be printed to the terminal as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md

@@ -169,7 +169,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>None</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.en.md

@@ -136,7 +136,7 @@ Parameter Description:
 {'res': {'input_path': 'vehicle_attribute_002.jpg', 'boxes': [{'labels': ['red(红色)', 'sedan(轿车)'], 'cls_scores': array([0.96375, 0.94025]), 'det_score': 0.9774094820022583, 'coordinate': [196.32553, 302.3847, 639.3131, 655.57904]}, {'labels': ['suv(SUV)', 'brown(棕色)'], 'cls_scores': array([0.99968, 0.99317]), 'det_score': 0.9705657958984375, 'coordinate': [769.4419, 278.8417, 1401.0217, 641.3569]}]}}
 ```
 
-For the explanation of the running result parameters, you can refer to the result interpretation in [Section 2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+For the explanation of the running result parameters, you can refer to the result interpretation in [Section 2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 The visualization results are saved under `save_path`, and the visualization result is as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.md

@@ -190,7 +190,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.en.md

@@ -487,7 +487,7 @@ paddlex --pipeline OCR \
         --device gpu:0
 ```
 
-For details on the relevant parameter descriptions, please refer to the parameter descriptions in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+For details on the relevant parameter descriptions, please refer to the parameter descriptions in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the results will be printed to the terminal as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md

@@ -576,7 +576,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.en.md

@@ -651,7 +651,7 @@ paddlex --pipeline PP-StructureV3 \
         --device gpu:0
 ```
 
-The parameter description can be found in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The parameter description can be found in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the result will be printed to the terminal, as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.md

@@ -744,7 +744,7 @@ for item in markdown_images:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.en.md

@@ -125,7 +125,7 @@ paddlex --pipeline doc_preprocessor \
         --save_path ./output \
         --device gpu:0
 ```
-You can refer to the parameter descriptions in [2.1.2 Python Script Integration](#212-python-script-integration) for related parameter details. Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+You can refer to the parameter descriptions in [2.1.2 Python Script Integration](#212-python-script-integration) for related parameter details. Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the results will be printed to the terminal as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.md

@@ -184,7 +184,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.en.md

@@ -393,7 +393,7 @@ paddlex --pipeline formula_recognition \
         --device gpu:0
 ```
 
-The relevant parameter descriptions can be referenced from [2.2 Integration via Python Script](#22-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be referenced from [2.2 Integration via Python Script](#22-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the results will be printed to the terminal, as shown below:
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md

@@ -470,7 +470,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>None</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.en.md

@@ -583,7 +583,7 @@ paddlex --pipeline layout_parsing \
         --save_path ./output \
         --device gpu:0
 ```
-For parameter descriptions, refer to the parameter explanations in [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+For parameter descriptions, refer to the parameter explanations in [2.2.2 Integration via Python Script](#222-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the results will be printed to the terminal, as shown below:
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md

@@ -696,7 +696,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.en.md

@@ -605,7 +605,7 @@ paddlex --pipeline seal_recognition \
     --save_path ./output
 ```
 
-The relevant parameter descriptions can be referred to in the parameter explanations of [2.1.2 Integration via Python Script](#212-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The relevant parameter descriptions can be referred to in the parameter explanations of [2.1.2 Integration via Python Script](#212-integration-via-python-script). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the results will be printed to the terminal, as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md

@@ -659,7 +659,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.en.md

@@ -648,7 +648,7 @@ paddlex --pipeline table_recognition \
         --device gpu:0
 ```
 
-The content of the parameters can refer to the parameter description in [2.2 Python Script Method](#22-python-script-method-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The content of the parameters can refer to the parameter description in [2.2 Python Script Method](#22-python-script-method-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 After running, the result will be printed to the terminal, as follows:
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md

@@ -694,7 +694,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.en.md

@@ -702,7 +702,7 @@ paddlex --pipeline table_recognition_v2 \
        [1046, ...,  573]], dtype=int16)}}]}}
 ```
 
-The explanation of the running result parameters can refer to the result interpretation in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The explanation of the running result parameters can refer to the result interpretation in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to the documentation on pipeline parallel inference.
 
 
 The visualization results are saved under `save_path`, where the visualization result of table recognition is as follows:

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.md

@@ -775,7 +775,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>device</code></td>
-<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考 <a href="../../instructions/parallel_inference.md#指定多个推理设备">产线并行推理</a>。</td>
+<td>产线推理设备。支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。支持同时指定多个设备以进行并行推理,详情请参考产线并行推理文档。</td>
 <td><code>str</code></td>
 <td><code>gpu:0</code></td>
 </tr>

+ 1 - 1
mkdocs.yml

@@ -367,7 +367,7 @@ nav:
        - 说明文件: 
          - PaddleX产线命令行使用说明: pipeline_usage/instructions/pipeline_CLI_usage.md
          - PaddleX产线Python脚本使用说明: pipeline_usage/instructions/pipeline_python_API.md
-         - 产线并行推理: pipline_usage/instructions/parallel_inference.md
+         - 产线并行推理: pipeline_usage/instructions/parallel_inference.md
   - 单功能模块使用教程:
        - OCR:
          - 文本检测模块: module_usage/tutorials/ocr_modules/text_detection.md
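The mkdocs.yml hunk above fixes the root cause of the dead links: the nav entry pointed at `pipline_usage/...` while the directory is actually `pipeline_usage/...`. A minimal checker along these lines would have flagged the typo before release. This is a hedged sketch under simplifying assumptions: it only matches plain `- label: path.md` nav lines rather than parsing the full YAML, and the function name is illustrative.

```python
# Minimal sketch: flag mkdocs nav entries whose target .md file does not
# exist under the docs root. Simplified line matching, not a YAML parser.
import re
from pathlib import Path

NAV_ENTRY = re.compile(r"^\s*-\s*[^:]+:\s*(\S+\.md)\s*$")

def missing_nav_targets(mkdocs_text: str, docs_root: Path) -> list[str]:
    missing = []
    for line in mkdocs_text.splitlines():
        m = NAV_ENTRY.match(line)
        if m and not (docs_root / m.group(1)).is_file():
            missing.append(m.group(1))
    return missing
```

Run against the pre-fix nav, `pipline_usage/instructions/parallel_inference.md` would be reported as missing, since only `pipeline_usage/instructions/parallel_inference.md` exists on disk.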