
Fix parameter descriptions in docs (#3872)

* Fix parameter descriptions in docs

* Add comment
Lin Manhui, 7 months ago
commit 2d2d52df50
100 changed files with 805 additions and 145 deletions
  1. + 1 - 1 docs/module_usage/instructions/benchmark.md
  2. + 12 - 0 docs/module_usage/instructions/config_parameters_3d.en.md
  3. + 12 - 0 docs/module_usage/instructions/config_parameters_3d.md
  4. + 13 - 0 docs/module_usage/instructions/config_parameters_common.en.md
  5. + 13 - 0 docs/module_usage/instructions/config_parameters_common.md
  6. + 12 - 0 docs/module_usage/instructions/config_parameters_time_series.en.md
  7. + 12 - 0 docs/module_usage/instructions/config_parameters_time_series.md
  8. + 2 - 0 docs/module_usage/instructions/model_python_API.en.md
  9. + 2 - 0 docs/module_usage/instructions/model_python_API.md
  10. + 8 - 1 docs/module_usage/tutorials/cv_modules/3d_bev_detection.en.md
  11. + 8 - 1 docs/module_usage/tutorials/cv_modules/3d_bev_detection.md
  12. + 8 - 1 docs/module_usage/tutorials/cv_modules/anomaly_detection.en.md
  13. + 8 - 1 docs/module_usage/tutorials/cv_modules/anomaly_detection.md
  14. + 8 - 1 docs/module_usage/tutorials/cv_modules/face_detection.en.md
  15. + 8 - 1 docs/module_usage/tutorials/cv_modules/face_detection.md
  16. + 8 - 1 docs/module_usage/tutorials/cv_modules/face_feature.en.md
  17. + 8 - 1 docs/module_usage/tutorials/cv_modules/face_feature.md
  18. + 8 - 1 docs/module_usage/tutorials/cv_modules/human_detection.en.md
  19. + 8 - 1 docs/module_usage/tutorials/cv_modules/human_detection.md
  20. + 8 - 1 docs/module_usage/tutorials/cv_modules/human_keypoint_detection.en.md
  21. + 8 - 1 docs/module_usage/tutorials/cv_modules/human_keypoint_detection.md
  22. + 8 - 1 docs/module_usage/tutorials/cv_modules/image_classification.en.md
  23. + 8 - 1 docs/module_usage/tutorials/cv_modules/image_classification.md
  24. + 8 - 1 docs/module_usage/tutorials/cv_modules/image_feature.en.md
  25. + 8 - 1 docs/module_usage/tutorials/cv_modules/image_feature.md
  26. + 8 - 1 docs/module_usage/tutorials/cv_modules/image_multilabel_classification.en.md
  27. + 8 - 1 docs/module_usage/tutorials/cv_modules/image_multilabel_classification.md
  28. + 8 - 1 docs/module_usage/tutorials/cv_modules/instance_segmentation.en.md
  29. + 8 - 8 docs/module_usage/tutorials/cv_modules/instance_segmentation.md
  30. + 8 - 1 docs/module_usage/tutorials/cv_modules/mainbody_detection.en.md
  31. + 8 - 1 docs/module_usage/tutorials/cv_modules/mainbody_detection.md
  32. + 8 - 1 docs/module_usage/tutorials/cv_modules/object_detection.en.md
  33. + 8 - 1 docs/module_usage/tutorials/cv_modules/object_detection.md
  34. + 8 - 1 docs/module_usage/tutorials/cv_modules/open_vocabulary_detection.en.md
  35. + 8 - 1 docs/module_usage/tutorials/cv_modules/open_vocabulary_detection.md
  36. + 8 - 1 docs/module_usage/tutorials/cv_modules/open_vocabulary_segmentation.en.md
  37. + 8 - 1 docs/module_usage/tutorials/cv_modules/open_vocabulary_segmentation.md
  38. + 8 - 1 docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.en.md
  39. + 8 - 1 docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md
  40. + 8 - 1 docs/module_usage/tutorials/cv_modules/rotated_object_detection.en.md
  41. + 8 - 1 docs/module_usage/tutorials/cv_modules/rotated_object_detection.md
  42. + 8 - 1 docs/module_usage/tutorials/cv_modules/semantic_segmentation.en.md
  43. + 8 - 1 docs/module_usage/tutorials/cv_modules/semantic_segmentation.md
  44. + 8 - 1 docs/module_usage/tutorials/cv_modules/small_object_detection.en.md
  45. + 8 - 1 docs/module_usage/tutorials/cv_modules/small_object_detection.md
  46. + 8 - 1 docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.en.md
  47. + 8 - 1 docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md
  48. + 8 - 1 docs/module_usage/tutorials/cv_modules/vehicle_detection.en.md
  49. + 8 - 1 docs/module_usage/tutorials/cv_modules/vehicle_detection.md
  50. + 8 - 1 docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.en.md
  51. + 8 - 1 docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md
  52. + 8 - 1 docs/module_usage/tutorials/ocr_modules/formula_recognition.en.md
  53. + 8 - 1 docs/module_usage/tutorials/ocr_modules/formula_recognition.md
  54. + 8 - 1 docs/module_usage/tutorials/ocr_modules/layout_detection.en.md
  55. + 8 - 1 docs/module_usage/tutorials/ocr_modules/layout_detection.md
  56. + 8 - 1 docs/module_usage/tutorials/ocr_modules/seal_text_detection.en.md
  57. + 8 - 1 docs/module_usage/tutorials/ocr_modules/seal_text_detection.md
  58. + 8 - 1 docs/module_usage/tutorials/ocr_modules/table_cells_detection.en.md
  59. + 8 - 1 docs/module_usage/tutorials/ocr_modules/table_cells_detection.md
  60. + 8 - 1 docs/module_usage/tutorials/ocr_modules/table_classification.en.md
  61. + 8 - 1 docs/module_usage/tutorials/ocr_modules/table_classification.md
  62. + 8 - 1 docs/module_usage/tutorials/ocr_modules/table_structure_recognition.en.md
  63. + 8 - 1 docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md
  64. + 8 - 1 docs/module_usage/tutorials/ocr_modules/text_detection.en.md
  65. + 8 - 1 docs/module_usage/tutorials/ocr_modules/text_detection.md
  66. + 8 - 1 docs/module_usage/tutorials/ocr_modules/text_image_unwarping.en.md
  67. + 8 - 1 docs/module_usage/tutorials/ocr_modules/text_image_unwarping.md
  68. + 8 - 1 docs/module_usage/tutorials/ocr_modules/text_recognition.en.md
  69. + 8 - 1 docs/module_usage/tutorials/ocr_modules/text_recognition.md
  70. + 8 - 1 docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.en.md
  71. + 8 - 1 docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.md
  72. + 8 - 1 docs/module_usage/tutorials/speech_modules/multilingual_speech_recognition.en.md
  73. + 8 - 1 docs/module_usage/tutorials/speech_modules/multilingual_speech_recognition.md
  74. + 8 - 1 docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.en.md
  75. + 8 - 1 docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md
  76. + 8 - 1 docs/module_usage/tutorials/time_series_modules/time_series_classification.en.md
  77. + 8 - 1 docs/module_usage/tutorials/time_series_modules/time_series_classification.md
  78. + 8 - 1 docs/module_usage/tutorials/time_series_modules/time_series_forecasting.en.md
  79. + 8 - 1 docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md
  80. + 8 - 1 docs/module_usage/tutorials/video_modules/video_classification.en.md
  81. + 8 - 1 docs/module_usage/tutorials/video_modules/video_classification.md
  82. + 8 - 1 docs/module_usage/tutorials/video_modules/video_detection.en.md
  83. + 8 - 1 docs/module_usage/tutorials/video_modules/video_detection.md
  84. + 9 - 2 docs/module_usage/tutorials/vlm_modules/doc_vlm.en.md
  85. + 9 - 2 docs/module_usage/tutorials/vlm_modules/doc_vlm.md
  86. + 12 - 19 docs/pipeline_deploy/high_performance_inference.en.md
  87. + 10 - 18 docs/pipeline_deploy/high_performance_inference.md
  88. + 6 - 2 docs/pipeline_deploy/serving.en.md
  89. + 6 - 2 docs/pipeline_deploy/serving.md
  90. + 2 - 0 docs/pipeline_usage/instructions/pipeline_CLI_usage.en.md
  91. + 3 - 1 docs/pipeline_usage/instructions/pipeline_CLI_usage.md
  92. + 2 - 0 docs/pipeline_usage/instructions/pipeline_python_API.en.md
  93. + 2 - 0 docs/pipeline_usage/instructions/pipeline_python_API.md
  94. + 10 - 2 docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.en.md
  95. + 11 - 3 docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.md
  96. + 10 - 2 docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.en.md
  97. + 11 - 3 docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.md
  98. + 10 - 2 docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.en.md
  99. + 11 - 3 docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.md
  100. + 10 - 2 docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.en.md

+ 1 - 1
docs/module_usage/instructions/benchmark.md

@@ -19,7 +19,7 @@ The benchmark feature collects per-operation timing statistics during the model's end-to-end inference
 * `PADDLE_PDX_INFER_BENCHMARK_ITERS`: number of test iterations, defaults to `0`;
 * `PADDLE_PDX_INFER_BENCHMARK_OUTPUT_DIR`: directory for saving metrics, e.g. `./benchmark`; defaults to `None`, meaning benchmark metrics are not saved;
 * `PADDLE_PDX_INFER_BENCHMARK_USE_CACHE_FOR_READ`: when set to `True`, a cache is applied to input-reading operations to avoid repeated I/O overhead, and the time spent reading and caching data is not counted in the core timing. Defaults to `False`;
-* `PADDLE_PDX_INFER_BENCHMARK_USE_NEW_INFER_API`: when set to `True`, the new inference API is used, which exposes more fine-grained per-stage results. Defaults to `False`
+* `PADDLE_PDX_INFER_BENCHMARK_USE_NEW_INFER_API`: when set to `True`, the new inference API is used, which exposes more fine-grained per-stage results. Defaults to `False`.
 
 **Note**:
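
Since these settings are ordinary environment variables, a benchmark run can be configured without code changes. A minimal sketch, setting only the variables shown in this hunk (the surrounding document lists the rest; the model name and input file are illustrative):

```python
import os

# Benchmark settings; the variable names are the ones documented above.
os.environ["PADDLE_PDX_INFER_BENCHMARK_ITERS"] = "20"                 # 20 timed iterations
os.environ["PADDLE_PDX_INFER_BENCHMARK_OUTPUT_DIR"] = "./benchmark"   # save metrics here
os.environ["PADDLE_PDX_INFER_BENCHMARK_USE_CACHE_FOR_READ"] = "True"  # cache input reads
os.environ["PADDLE_PDX_INFER_BENCHMARK_USE_NEW_INFER_API"] = "True"   # per-stage results

from paddlex import create_model  # import after the environment is prepared

model = create_model(model_name="PP-LCNet_x1_0")  # illustrative model
for res in model.predict("example.png", batch_size=1):
    pass  # metrics are collected during the run and written to ./benchmark
```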
 

+ 12 - 0
docs/module_usage/instructions/config_parameters_3d.en.md

@@ -223,5 +223,17 @@ comments: true
 <td>Inference engine setting, such as: "paddle"</td>
 <td>paddle</td>
 </tr>
+<tr>
+<td>use_hpip</td>
+<td>bool</td>
+<td>Whether to enable the high-performance inference plugin</td>
+<td></td>
+</tr>
+<tr>
+<td>hpip_config</td>
+<td>dict | None</td>
+<td>High-performance inference configuration</td>
+<td></td>
+</tr>
 </tbody>
 </table>

+ 12 - 0
docs/module_usage/instructions/config_parameters_3d.md

@@ -219,5 +219,17 @@ comments: true
 <td>Inference engine setting, e.g. "paddle"</td>
 <td>paddle</td>
 </tr>
+<tr>
+<td>use_hpip</td>
+<td>bool</td>
+<td>Whether to enable the high-performance inference plugin</td>
+<td></td>
+</tr>
+<tr>
+<td>hpip_config</td>
+<td>dict | None</td>
+<td>High-performance inference configuration</td>
+<td></td>
+</tr>
 </tbody>
 </table>

+ 13 - 0
docs/module_usage/instructions/config_parameters_common.en.md

@@ -267,5 +267,18 @@ comments: true
 <td>Inference engine setting, such as: "run_mode: paddle"</td>
 </tr>
 
+<tr>
+<td>use_hpip</td>
+<td>bool</td>
+<td>Whether to enable the high-performance inference plugin</td>
+<td></td>
+</tr>
+<tr>
+<td>hpip_config</td>
+<td>dict | None</td>
+<td>High-performance inference configuration</td>
+<td></td>
+</tr>
+
 </tbody>
 </table>
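
Because the two new keys are plain YAML entries, they can also be patched programmatically. Below is a sketch under stated assumptions: the `Predict` section name, the config file path, and the `backend` key inside `hpip_config` are illustrative, not prescribed by the table above.

```python
import yaml  # PyYAML

# Load a module config, enable the high-performance inference plugin,
# and write the file back.
with open("config.yaml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

predict = cfg.setdefault("Predict", {})           # assumed section name
predict["use_hpip"] = True                        # bool, per the table above
predict["hpip_config"] = {"backend": "tensorrt"}  # dict | None; key is illustrative

with open("config.yaml", "w", encoding="utf-8") as f:
    yaml.safe_dump(cfg, f, allow_unicode=True)
```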

+ 13 - 0
docs/module_usage/instructions/config_parameters_common.md

@@ -261,5 +261,18 @@ comments: true
 <td></td>
 </tr>
 
+<tr>
+<td>use_hpip</td>
+<td>bool</td>
+<td>Whether to enable the high-performance inference plugin</td>
+<td></td>
+</tr>
+<tr>
+<td>hpip_config</td>
+<td>dict | None</td>
+<td>High-performance inference configuration</td>
+<td></td>
+</tr>
+
 </tbody>
 </table>

+ 12 - 0
docs/module_usage/instructions/config_parameters_time_series.en.md

@@ -308,5 +308,17 @@ comments: true
 <td>Path to the prediction input</td>
 <td>The prediction input path specified in the YAML file</td>
 </tr>
+<tr>
+<td>use_hpip</td>
+<td>bool</td>
+<td>Whether to enable the high-performance inference plugin</td>
+<td></td>
+</tr>
+<tr>
+<td>hpip_config</td>
+<td>dict | None</td>
+<td>High-performance inference configuration</td>
+<td></td>
+</tr>
 </tbody>
 </table>

+ 12 - 0
docs/module_usage/instructions/config_parameters_time_series.md

@@ -306,5 +306,17 @@ comments: true
 <td>Path to the prediction input</td>
 <td>The prediction input path specified in the YAML file</td>
 </tr>
+<tr>
+<td>use_hpip</td>
+<td>bool</td>
+<td>Whether to enable the high-performance inference plugin</td>
+<td></td>
+</tr>
+<tr>
+<td>hpip_config</td>
+<td>dict | None</td>
+<td>High-performance inference configuration</td>
+<td></td>
+</tr>
 </tbody>
 </table>

+ 2 - 0
docs/module_usage/instructions/model_python_API.en.md

@@ -36,6 +36,8 @@ In short, just three steps:
     * `batch_size`: `int` type, default to `1`;
     * `device`: `str` type, used to set the inference device, such as "cpu", "gpu:2" for GPU settings. By default, using 0 id GPU if available, otherwise CPU;
     * `pp_option`: `PaddlePredictorOption` type, used to change inference settings (e.g. the operating mode). Please refer to [4-Inference Configuration](#4-inference-configuration) for more details;
+    * `use_hpip`: `bool` type, whether to enable the high-performance inference plugin;
+    * `hpi_config`: `dict | None` type, high-performance inference configuration;
     * _`inference hyperparameters`_: used to set common inference hyperparameters. Please refer to specific model description document for details.
   * Return Value: `BasePredictor` type.
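
Put together, a call using the two new parameters might look like the following sketch (the model name and the contents of `hpi_config` are illustrative, not prescribed by this document):

```python
from paddlex import create_model

model = create_model(
    model_name="PP-LCNet_x1_0",          # illustrative built-in model
    device="gpu:0",
    use_hpip=True,                       # enable the high-performance inference plugin
    hpi_config={"backend": "tensorrt"},  # assumed config key, for illustration only
)

for res in model.predict("example.png", batch_size=1):
    res.print()
```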
 

+ 2 - 0
docs/module_usage/instructions/model_python_API.md

@@ -37,6 +37,8 @@ for res in output:
     * `batch_size`: `int` type, defaults to `1`;
     * `device`: `str` type, used to set the model inference device; for GPUs a card number can be specified, e.g. "cpu" or "gpu:2". By default, GPU 0 is used if available, otherwise the CPU;
     * `pp_option`: `PaddlePredictorOption` type, used to change configuration items such as the run mode; for a detailed description of inference configuration, see [4-Inference Configuration](#4-推理配置) below;
+    * `use_hpip`: `bool` type, whether to enable the high-performance inference plugin;
+    * `hpi_config`: `dict | None` type, high-performance inference configuration;
     * _`inference hyperparameters`_: common inference hyperparameters can be modified; see the specific model's documentation for details;
   * Return value: `BasePredictor` type.
 

+ 8 - 1
docs/module_usage/tutorials/cv_modules/3d_bev_detection.en.md

@@ -190,11 +190,18 @@ The following is an explanation of relevant methods and parameters:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX will be used. If `model_dir` is specified, the user-defined model will be used.
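
A short sketch of that precedence with the new parameters included (the model name and path are illustrative; the defaults mirror the table above):

```python
from paddlex import create_model

# With only model_name, the built-in parameters are used; adding model_dir
# switches to the user-trained weights.
model = create_model(
    model_name="BEVFusion",                     # illustrative model name
    model_dir="./output/best_model/inference",  # illustrative path to custom weights
    use_hpip=False,                             # default, per the table
    hpi_config=None,                            # default, per the table
)
```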

+ 8 - 1
docs/module_usage/tutorials/cv_modules/3d_bev_detection.md

@@ -198,11 +198,18 @@ python paddlex/inference/models/3d_bev_detection/visualizer_3d.py --save_path=".
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/anomaly_detection.en.md

@@ -137,11 +137,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX will be used. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/anomaly_detection.md

@@ -142,11 +142,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/face_detection.en.md

@@ -198,11 +198,18 @@ The explanations for the methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/face_detection.md

@@ -193,11 +193,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/face_feature.en.md

@@ -158,11 +158,18 @@ The explanations for the methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/face_feature.md

@@ -183,11 +183,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/human_detection.en.md

@@ -162,11 +162,18 @@ The explanations for the methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/human_detection.md

@@ -157,11 +157,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/human_keypoint_detection.en.md

@@ -166,11 +166,18 @@ The explanations for the methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/human_keypoint_detection.md

@@ -174,12 +174,19 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>flip</code></td>
 <td>Whether to fuse the inference results of a horizontally flipped image; if True, the model runs inference again on the horizontally flipped input and fuses the two predictions to improve keypoint prediction accuracy</td>
 <td><code>bool</code></td>

+ 8 - 1
docs/module_usage/tutorials/cv_modules/image_classification.en.md

@@ -793,11 +793,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/image_classification.md

@@ -796,11 +796,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/image_feature.en.md

@@ -148,11 +148,18 @@ Descriptions of related methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. When `model_name` is specified, PaddleX's built-in model parameters are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/image_feature.md

@@ -147,11 +147,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/image_multilabel_classification.en.md

@@ -180,11 +180,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/image_multilabel_classification.md

@@ -188,11 +188,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/instance_segmentation.en.md

@@ -285,11 +285,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 8
docs/module_usage/tutorials/cv_modules/instance_segmentation.md

@@ -274,12 +274,19 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>threshold</code></td>
 <td>Threshold for filtering out low-score objects</td>
 <td><code>float/None</code></td>
@@ -337,13 +344,6 @@ for res in output:
 </td>
 <td>None</td>
 </tr>
-<tr>
-<td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
-<td><code>bool</code></td>
-<td>None</td>
-<td><code>False</code></td>
-</tr>
 </table>
 
 * The prediction results are processed; each sample's prediction result is a corresponding Result object, which supports printing, saving as an image, and saving as a `json` file:

+ 8 - 1
docs/module_usage/tutorials/cv_modules/mainbody_detection.en.md

@@ -154,11 +154,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/mainbody_detection.md

@@ -150,11 +150,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/object_detection.en.md

@@ -486,11 +486,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/object_detection.md

@@ -509,11 +509,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/open_vocabulary_detection.en.md

@@ -161,11 +161,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the model parameters built into PaddleX will be used by default. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/open_vocabulary_detection.md

@@ -152,12 +152,19 @@ for res in results:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>thresholds</code></td>
 <td>Filtering thresholds used by the model</td>
 <td><code>dict/None</code></td>

+ 8 - 1
docs/module_usage/tutorials/cv_modules/open_vocabulary_segmentation.en.md

@@ -154,11 +154,18 @@ Related methods and parameter explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the model parameters built into PaddleX will be used by default. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/open_vocabulary_segmentation.md

@@ -153,11 +153,18 @@ for res in results:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.en.md

@@ -174,11 +174,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, PaddleX's built-in model parameters are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md

@@ -174,11 +174,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/rotated_object_detection.en.md

@@ -143,12 +143,19 @@ Related methods and parameter explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>threshold</code></td>
 <td>The threshold for filtering low-score objects</td>
 <td><code>float/None/dict</code></td>

+ 8 - 1
docs/module_usage/tutorials/cv_modules/rotated_object_detection.md

@@ -138,12 +138,19 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>threshold</code></td>
 <td>Threshold for filtering out low-score objects</td>
 <td><code>float/None/dict[int, float]</code></td>

+ 8 - 1
docs/module_usage/tutorials/cv_modules/semantic_segmentation.en.md

@@ -328,11 +328,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the built-in model parameters of PaddleX are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/semantic_segmentation.md

@@ -324,11 +324,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/small_object_detection.en.md

@@ -172,11 +172,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/small_object_detection.md

@@ -166,11 +166,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.en.md

@@ -156,11 +156,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, PaddleX's built-in model parameters are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md

@@ -155,11 +155,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/vehicle_detection.en.md

@@ -157,11 +157,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the built-in model parameters of PaddleX are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/cv_modules/vehicle_detection.md

@@ -154,11 +154,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.en.md

@@ -145,11 +145,18 @@ Related methods, parameters, and other explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX will be used. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md

@@ -144,11 +144,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/formula_recognition.en.md

@@ -167,11 +167,18 @@ The explanations for the methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/formula_recognition.md

@@ -161,11 +161,18 @@ sudo apt-get install texlive texlive-latex-base texlive-latex-extra -y
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/layout_detection.en.md

@@ -371,11 +371,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * Note that `model_name` must be specified. After specifying `model_name`, the default PaddleX built-in model parameters will be used. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/layout_detection.md

@@ -372,11 +372,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/seal_text_detection.en.md

@@ -226,11 +226,18 @@ The explanations of related methods and parameters are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the built-in model parameters of PaddleX will be used by default. On this basis, if `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/seal_text_detection.md

@@ -225,11 +225,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/table_cells_detection.en.md

@@ -151,12 +151,19 @@ The following is the explanation of the methods, parameters, etc.:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>img_size</code></td>
 <td>Size of the input image; if not specified, the default configuration of the PaddleX official model will be used</td>
 <td><code>int/list</code></td>

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/table_cells_detection.md

@@ -150,12 +150,19 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>img_size</code></td>
 <td>Input image size; if not specified, the PaddleX official model configuration is used by default</td>
 <td><code>int/list</code></td>

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/table_classification.en.md

@@ -134,11 +134,18 @@ The descriptions of the related methods and parameters are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying the `model_name`, the default model parameters in PaddleX will be used. On this basis, if `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/table_classification.md

@@ -136,11 +136,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/table_structure_recognition.en.md

@@ -162,11 +162,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * <code>model_name</code> must be specified. After specifying <code>model_name</code>, the default model parameters from PaddleX will be used. If <code>model_dir</code> is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md

@@ -157,11 +157,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/text_detection.en.md

@@ -207,11 +207,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX will be used. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/text_detection.md

@@ -223,11 +223,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/text_image_unwarping.en.md

@@ -142,11 +142,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX will be used. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/text_image_unwarping.md

@@ -138,11 +138,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/text_recognition.en.md

@@ -382,11 +382,18 @@ The explanations for the methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/text_recognition.md

@@ -405,11 +405,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.en.md

@@ -144,11 +144,18 @@ The explanations for the methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.md

@@ -146,11 +146,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/speech_modules/multilingual_speech_recognition.en.md

@@ -75,11 +75,18 @@ Related methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin. Not supported for now.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration. Not supported for now.</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the built-in model parameters of PaddleX are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/speech_modules/multilingual_speech_recognition.md

@@ -124,11 +124,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin. Not supported for now.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration. Not supported for now.</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX are used. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.en.md

@@ -174,11 +174,18 @@ Relevant methods, parameters, and explanations are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * Note that `model_name` must be specified. After specifying `model_name`, the default PaddleX built-in model parameters will be used. If `model_dir` is specified, the user-defined model will be used.

+ 8 - 1
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md

@@ -178,11 +178,18 @@ timestamp
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理</td>
+<td>是否启用高性能推理插件</td>
 <td><code>bool</code></td>
 <td>无</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * 其中,`model_name` 必须指定,指定 `model_name` 后,默认使用 PaddleX 内置的模型参数,在此基础上,指定 `model_dir` 时,使用用户自定义的模型。

+ 8 - 1
docs/module_usage/tutorials/time_series_modules/time_series_classification.en.md

@@ -142,11 +142,18 @@ Descriptions of related methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, PaddleX's built-in model parameters are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/time_series_modules/time_series_classification.md

@@ -145,11 +145,18 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理</td>
+<td>是否启用高性能推理插件</td>
 <td><code>bool</code></td>
 <td>无</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * 其中,`model_name` 必须指定,指定 `model_name` 后,默认使用 PaddleX 内置的模型参数,在此基础上,指定 `model_dir` 时,使用用户自定义的模型。

+ 8 - 1
docs/module_usage/tutorials/time_series_modules/time_series_forecasting.en.md

@@ -177,11 +177,18 @@ Descriptions of related methods, parameters, etc., are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, PaddleX's built-in model parameters are used by default. If `model_dir` is specified, the user-defined model is used.

+ 8 - 1
docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md

@@ -192,11 +192,18 @@ date
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理</td>
+<td>是否启用高性能推理插件</td>
 <td><code>bool</code></td>
 <td>无</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * 其中,`model_name` 必须指定,指定 `model_name` 后,默认使用 PaddleX 内置的模型参数,在此基础上,指定 `model_dir` 时,使用用户自定义的模型。

+ 8 - 1
docs/module_usage/tutorials/video_modules/video_classification.en.md

@@ -152,12 +152,19 @@ The Python script above performs the following steps:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>topk</code></td>
 <td>The top <code>topk</code> categories and corresponding classification probabilities of the prediction result; if not specified, the default configuration of the PaddleX official model will be used</td>
 <td><code>int</code></td>
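Since `topk` sits in the same parameter table as the plugin switches, a combined sketch may help; the model name, video path, and the assumption that `topk` is accepted at model creation are all illustrative:

```python
from paddlex import create_model

# Sketch only: an assumed video classification model name, with the
# plugin enabled and the module's own topk hyperparameter overridden.
model = create_model(
    model_name="PP-TSM-R50_8frames_uniform",
    use_hpip=True,
    topk=3,  # keep the top-3 categories instead of the official default
)

for res in model.predict("demo_video.mp4"):  # hypothetical input video
    res.print()
```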

+ 8 - 1
docs/module_usage/tutorials/video_modules/video_classification.md

@@ -151,12 +151,19 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理</td>
+<td>是否启用高性能推理插件</td>
 <td><code>bool</code></td>
 <td>无</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>topk</code></td>
 <td>预测结果的前 <code>topk</code> 个类别和对应的分类概率;如果不指定,将默认使用PaddleX官方模型配置</td>
 <td><code>int</code></td>

+ 8 - 1
docs/module_usage/tutorials/video_modules/video_detection.en.md

@@ -142,12 +142,19 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. </td>
+<td>Whether to enable the high-performance inference plugin</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>nms_thresh</code></td>
 <td>The IoU threshold parameter in the Non-Maximum Suppression (NMS) process; if not specified, the default configuration of the PaddleX official model will be used</td>
 <td><code>float</code> | <code>None</code></td>

+ 8 - 1
docs/module_usage/tutorials/video_modules/video_detection.md

@@ -145,12 +145,19 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理</td>
+<td>是否启用高性能推理插件</td>
 <td><code>bool</code></td>
 <td>无</td>
 <td><code>False</code></td>
 </tr>
 <tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
+<tr>
 <td><code>nms_thresh</code></td>
 <td>非极大值抑制(Non-Maximum Suppression, NMS)过程中的IoU阈值参数;如果不指定,将默认使用PaddleX官方模型配置</td>
 <td><code>float</code> | <code>None</code></td>

+ 9 - 2
docs/module_usage/tutorials/vlm_modules/doc_vlm.en.md

@@ -110,11 +110,18 @@ The explanation of related methods and parameters are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Enable high-performance inference</td>
+<td>Whether to enable the high-performance inference plugin. Not supported for now.</td>
 <td><code>bool</code></td>
-<td>None, currently not supported</td>
+<td>None</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration. Not supported for now.</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * `model_name` must be specified. After specifying it, the default model parameters built into PaddleX are used, and if `model_dir` is specified, the user-defined model is used.

+ 9 - 2
docs/module_usage/tutorials/vlm_modules/doc_vlm.md

@@ -114,11 +114,18 @@ for res in results:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理</td>
+<td>是否启用高性能推理插件。目前暂不支持。</td>
 <td><code>bool</code></td>
-<td>无, 目前暂不支持</td>
+<td>无</td>
 <td><code>False</code></td>
 </tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置。目前暂不支持。</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
 </table>
 
 * 其中,`model_name` 必须指定,指定 `model_name` 后,默认使用 PaddleX 内置的模型参数,在此基础上,指定 `model_dir` 时,使用用户自定义的模型。

+ 12 - 19
docs/pipeline_deploy/high_performance_inference.en.md

@@ -8,19 +8,19 @@ In real production environments, many applications impose strict performance met
 
 ## Table of Contents
 
-- [1. Basic Usage](#1.-Basic-Usage)
+- [1. Installation and Basic Usage](#1.-Installation-and-Basic-Usage)
   - [1.1 Installing the High-Performance Inference Plugin](#1.1-Installing-the-High-Performance-Inference-Plugin)
   - [1.2 Enabling the High-Performance Inference Plugin](#1.2-Enabling-the-High-Performance-Inference-Plugin)
 - [2. Advanced Usage](#2-Advanced-Usage)
   - [2.1 Working Modes of High-Performance Inference](#21-Working-Modes-of-High-Performance-Inference)
   - [2.2 High-Performance Inference Configuration](#22-High-Performance-Inference-Configuration)
   - [2.3 Modifying the High-Performance Inference Configuration](#23-Modifying-the-High-Performance-Inference-Configuration)
-  - [2.4 Enabling/Disabling the High-Performance Inference Plugin on Sub-pipelines/Submodules](#24-EnablingDisabling-the-High-Performance-Inference-Plugin-on-Sub-pipelinesSubmodules)
+  - [2.4 Enabling/Disabling the High-Performance Inference Plugin in Configuration Files](#24-EnablingDisabling-the-High-Performance-Inference-Plugin-in-Configuration-Files)
   - [2.5 Model Cache Description](#25-Model-Cache-Description)
   - [2.6 Customizing the Model Inference Library](#26-Customizing-the-Model-Inference-Library)
 - [3. Frequently Asked Questions](#3-Frequently-Asked-Questions)
 
-## 1. Basic Usage
+## 1. Installation and Basic Usage
 
 Before using the high-performance inference plugin, please ensure that you have completed the PaddleX installation according to the [PaddleX Local Installation Tutorial](../installation/installation.en.md) and have run the quick inference using the PaddleX pipeline command line or the PaddleX pipeline Python script as described in the usage instructions.
 
@@ -148,7 +148,6 @@ For the PaddleX CLI, specify `--use_hpip` to enable the high-performance inferen
 paddlex \
     --pipeline image_classification \
     --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    --device gpu:0 \
     --use_hpip
 ```
 
@@ -160,7 +159,6 @@ python main.py \
     -o Global.mode=predict \
     -o Predict.model_dir=None \
     -o Predict.input=https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    -o Global.device=gpu:0 \
     -o Predict.use_hpip=True
 ```
 
@@ -173,7 +171,6 @@ from paddlex import create_pipeline
 
 pipeline = create_pipeline(
     pipeline="image_classification",
-    device="gpu",
     use_hpip=True
 )
 
@@ -187,7 +184,6 @@ from paddlex import create_model
 
 model = create_model(
     model_name="ResNet18",
-    device="gpu",
     use_hpip=True
 )
 
@@ -196,7 +192,8 @@ output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/
 
 The inference results obtained with the high-performance inference plugin enabled are identical to those without the plugin. For some models, **the first time the high-performance inference plugin is enabled, it may take a longer time to complete the construction of the inference engine**. PaddleX caches the related information in the model directory after the inference engine is built for the first time, and subsequently reuses the cached content to improve the initialization speed.
 
-**By default, enabling the high-performance inference plugin applies to the entire pipeline/module.** If you want to control the scope in a more granular way (e.g., enabling the high-performance inference plugin for only a sub-pipeline or a submodule), you can set the `use_hpip` parameter at different configuration levels in the pipeline configuration file. Please refer to [2.4 Enabling/Disabling the High-Performance Inference Plugin on Sub-pipelines/Submodules](#24-EnablingDisabling-the-High-Performance-Inference-Plugin-on-Sub-pipelinesSubmodules) for more details.
+**Enabling the high-performance inference plugin via the PaddleX CLI and Python API applies by default to the entire pipeline/module.** If you need finer-grained control, for example enabling the plugin only for a specific sub-pipeline or sub-module within the pipeline, you can set `use_hpip` at the appropriate level in the pipeline configuration file; please refer to [2.4 Enabling/Disabling the High-Performance Inference Plugin in Configuration Files](#24-EnablingDisabling-the-High-Performance-Inference-Plugin-in-Configuration-Files). If `use_hpip` is not specified in the CLI options, API calls, or any configuration file, the high-performance inference plugin remains disabled by default.
 
 ## 2. Advanced Usage
 
@@ -344,7 +341,6 @@ hpi_config:
 paddlex \
     --pipeline image_classification \
     --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    --device gpu:0 \
     --use_hpip \
     --hpi_config '{"backend": "onnxruntime"}'
 ```
@@ -357,7 +353,6 @@ from paddlex import create_pipeline
 
 pipeline = create_pipeline(
     pipeline="OCR",
-    device="gpu",
     use_hpip=True,
     hpi_config={"backend": "onnxruntime"}
 )
@@ -385,7 +380,6 @@ python main.py \
     -o Global.mode=predict \
     -o Predict.model_dir=None \
     -o Predict.input=https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    -o Global.device=gpu:0 \
     -o Predict.use_hpip=True \
     -o Predict.hpi_config='{"backend": "onnxruntime"}'
 ```
@@ -398,7 +392,6 @@ from paddlex import create_model
 
 model = create_model(
     model_name="ResNet18",
-    device="gpu",
     use_hpip=True,
     hpi_config={"backend": "onnxruntime"}
 )
@@ -463,9 +456,9 @@ Predict:
 
 </details>
 
-### 2.4 Enabling/Disabling the High-Performance Inference Plugin on Sub-pipelines/Submodules
+### 2.4 Enabling/Disabling the High-Performance Inference Plugin in Configuration Files
 
-High-performance inference supports enabling the high-performance inference plugin for only specific sub-pipelines/submodules by configuring `use_hpip` at the sub-pipeline or submodule level. For example:
+In the configuration file, you can use `use_hpip` to control whether the high-performance inference plugin is enabled or disabled. Unlike configuring via the CLI or API, this approach allows you to specify `use_hpip` at the sub-pipeline or sub-module level, enabling **high-performance inference only for a specific sub-pipeline or sub-module within the entire pipeline**. For example:
 
 **In the general OCR pipeline, enable high-performance inference for the `text_detection` module, but not for the `text_recognition` module:**
 
@@ -475,21 +468,21 @@ High-performance inference supports enabling the high-performance inference plug
 SubModules:
   TextDetection:
     ...
-    use_hpip: True # This submodule uses high-performance inference
+    use_hpip: True # This sub-module uses high-performance inference
   TextLineOrientation:
     ...
-    # This submodule does not have a specific configuration; it defaults to the global configuration
+    # This sub-module does not have a specific configuration; it defaults to the global configuration
     # (if neither the configuration file nor CLI/API parameters set it, high-performance inference will not be used)
   TextRecognition:
     ...
-    use_hpip: False # This submodule does not use high-performance inference
+    use_hpip: False # This sub-module does not use high-performance inference
 ```
 
 </details>
 
 **Note:**
 
-1. When setting `use_hpip` in sub-pipelines or submodules, the configuration at the deepest level will take precedence.
+1. When `use_hpip` is set at multiple levels in the configuration file, the setting at the deepest level takes precedence.
 2. **When enabling or disabling the high-performance inference plugin by modifying the pipeline configuration file, it is not recommended to also configure it using the CLI or Python API.** Setting `use_hpip` through the CLI or Python API is equivalent to modifying the top-level `use_hpip` in the configuration file.
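A minimal sketch of running a pipeline from such an edited configuration file; the local path and input are hypothetical:

```python
from paddlex import create_pipeline

# Load the pipeline from the edited configuration file rather than by
# name. Per note 2 above, use_hpip is deliberately not passed here, so
# the per-submodule settings in the file take effect.
pipeline = create_pipeline(pipeline="./OCR.yaml")  # hypothetical local path

for res in pipeline.predict("doc_image.png"):  # hypothetical input image
    res.print()
```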
 
 ### 2.5 Model Cache Description
@@ -543,7 +536,7 @@ If you need to customize the build of `ultra-infer`, you can modify the followin
 
 Example:
 
-```shell
+```bash
 # Build
 cd PaddleX/libs/ultra-infer/scripts/linux
 # export PYTHON_VERSION=...

+ 10 - 18
docs/pipeline_deploy/high_performance_inference.md

@@ -8,19 +8,19 @@ comments: true
 
 ## 目录
 
-- [1. 基础使用方法](#1.-基础使用方法)
+- [1. 安装与基础使用方法](#1.-安装与基础使用方法)
   - [1.1 安装高性能推理插件](#1.1-安装高性能推理插件)
   - [1.2 启用高性能推理插件](#1.2-启用高性能推理插件)
 - [2. 进阶使用方法](#2-进阶使用方法)
   - [2.1 高性能推理工作模式](#21-高性能推理工作模式)
   - [2.2 高性能推理配置](#22-高性能推理配置)
   - [2.3 修改高性能推理配置](#23-修改高性能推理配置)
-  - [2.4 高性能推理插件在子产线/子模块中的启用/禁用](#24-高性能推理插件在子产线子模块中的启用禁用)
+  - [2.4 在配置文件中启用/禁用高性能推理插件](#24-在配置文件中启用禁用高性能推理插件)
   - [2.5 模型缓存说明](#25-模型缓存说明)
   - [2.6 定制模型推理库](#26-定制模型推理库)
 - [3. 常见问题](#3.-常见问题)
 
-## 1. 基础使用方法
+## 1. 安装与基础使用方法
 
 使用高性能推理插件前,请确保您已经按照 [PaddleX本地安装教程](../installation/installation.md) 完成了PaddleX的安装,且按照PaddleX产线命令行使用说明或PaddleX产线Python脚本使用说明跑通了产线的快速推理。
 
@@ -150,7 +150,6 @@ paddlex --install hpi-gpu
 paddlex \
     --pipeline image_classification \
     --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    --device gpu:0 \
     --use_hpip
 ```
 
@@ -162,7 +161,6 @@ python main.py \
     -o Global.mode=predict \
     -o Predict.model_dir=None \
     -o Predict.input=https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    -o Global.device=gpu:0 \
     -o Predict.use_hpip=True
 ```
 
@@ -175,7 +173,6 @@ from paddlex import create_pipeline
 
 pipeline = create_pipeline(
     pipeline="image_classification",
-    device="gpu",
     use_hpip=True
 )
 
@@ -189,7 +186,6 @@ from paddlex import create_model
 
 model = create_model(
     model_name="ResNet18",
-    device="gpu",
     use_hpip=True
 )
 
@@ -198,11 +194,11 @@ output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/
 
 启用高性能推理插件得到的推理结果与未启用插件时一致。对于部分模型,**在首次启用高性能推理插件时,可能需要花费较长时间完成推理引擎的构建**。PaddleX 将在推理引擎的第一次构建完成后将相关信息缓存在模型目录,并在后续复用缓存中的内容以提升初始化速度。
 
-**启用高性能推理插件默认作用于整条产线/整个模块**,若想细粒度控制作用范围,如只对产线中某条子产线或某个子模块启用高性能推理插件,可以在产线配置文件中不同层级的配置里设置`use_hpip`,请参考 [2.4 高性能推理插件在子产线/子模块中的启用/禁用](#24-高性能推理插件在子产线子模块中的启用禁用)
+**通过 PaddleX CLI 和 Python API 启用高性能推理插件默认作用于整条产线/模块**,若想细粒度控制作用范围,如只对产线中某条子产线或某个子模块启用高性能推理插件,可以在产线配置文件中不同层级的配置里设置 `use_hpip`,请参考 [2.4 在配置文件中启用/禁用高性能推理插件](#24-在配置文件中启用禁用高性能推理插件)。如果 CLI 参数、API 参数以及配置文件中均未指定 `use_hpip`,默认不启用高性能推理插件。
 
 ## 2. 进阶使用方法
 
-本节介绍高性能推理插件的进阶使用方法,适合对模型部署有一定了解或希望进行手动配置调优的用户。用户可以参照配置说明和示例,根据自身需求自定义使用高性能推理插件。接下来将对进阶使用方法进行详细介绍。
+本节介绍高性能推理插件的进阶使用方法,适合对模型部署有一定了解或希望进行手动配置调优的用户。用户可以参照配置说明和示例,根据自身需求自定义使用高性能推理插件。接下来将对高性能推理插件进阶使用方法进行详细介绍。
 
 ### 2.1 高性能推理工作模式
 
@@ -218,7 +214,7 @@ output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/
 
 ### 2.2 高性能推理配置
 
-常用高性能推理配置包含以下配置项:
+常用高性能推理配置项包括
 
 <table>
 <thead>
@@ -346,7 +342,6 @@ hpi_config:
 paddlex \
     --pipeline image_classification \
     --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    --device gpu:0 \
     --use_hpip \
     --hpi_config '{"backend": "onnxruntime"}'
 ```
@@ -359,7 +354,6 @@ from paddlex import create_pipeline
 
 pipeline = create_pipeline(
     pipeline="OCR",
-    device="gpu",
     use_hpip=True,
     hpi_config={"backend": "onnxruntime"}
 )
@@ -387,7 +381,6 @@ python main.py \
     -o Global.mode=predict \
     -o Predict.model_dir=None \
     -o Predict.input=https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg \
-    -o Global.device=gpu:0 \
     -o Predict.use_hpip=True \
     -o Predict.hpi_config='{"backend": "onnxruntime"}'
 ```
@@ -400,7 +393,6 @@ from paddlex import create_model
 
 model = create_model(
     model_name="ResNet18",
-    device="gpu",
     use_hpip=True,
     hpi_config={"backend": "onnxruntime"}
 )
@@ -465,9 +457,9 @@ Predict:
 
 </details>
 
-### 2.4 高性能推理插件在子产线/子模块中的启用/禁用
+### 2.4 在配置文件中启用/禁用高性能推理插件
 
-高性能推理支持通过在子产线/子模块级别使用 `use_hpip`,实现**仅产线中的某个子产线/子模块使用高性能推理**。示例如下:
+在配置文件中,可以使用 `use_hpip` 控制高性能推理插件的启用和禁用。与通过 CLI 和 API 配置不同的是,这种方式支持通过在子产线/子模块级别使用 `use_hpip`,实现**仅产线中的某个子产线/子模块使用高性能推理**。示例如下:
 
 **通用OCR产线的 `text_detection` 模块使用高性能推理,`text_recognition` 模块不使用高性能推理:**
 
@@ -490,7 +482,7 @@ SubModules:
 
 **注意:**
 
-1. 在子产线或子模块中设置 `use_hpip` 时,将以最深层的配置为准。
+1. 在配置文件中的多个层级设置 `use_hpip` 时,将以最深层的配置为准。
 2. **当通过修改产线配置文件的方式启用/禁用高性能推理插件时,不建议同时使用 CLI 或 Python API 的方式进行设置。** 通过 CLI 或 Python API 设置 `use_hpip` 等同于修改配置文件顶层的 `use_hpip`。
 
 ### 2.5 模型缓存说明
@@ -544,7 +536,7 @@ SubModules:
 
 示例:
 
-```shell
+```bash
 # 构建
 cd PaddleX/libs/ultra-infer/scripts/linux
 # export PYTHON_VERSION=...

+ 6 - 2
docs/pipeline_deploy/serving.en.md

@@ -86,13 +86,17 @@ The command-line options related to serving are as follows:
 <td><code>--use_hpip</code></td>
 <td>If specified, enables the high-performance inference plugin.</td>
 </tr>
+<tr>
+<td><code>--hpi_config</code></td>
+<td>High-performance inference configuration.</td>
+</tr>
 </tbody>
 </table>
 
 In application scenarios where strict requirements are placed on service response time, the PaddleX high-performance inference plugin can be used to accelerate model inference and pre/post-processing, thereby reducing response time and increasing throughput.
 
-To use the PaddleX high-performance inference plugin, please refer to the [PaddleX High-Performance Inference Guide](./high_performance_inference.en.md). Note that not all pipelines, models, and environments support the use of the high-performance inference plugin. For detailed information on supported pipelines and models, please refer to the section on supported pipelines and models for high-performance inference plugins.
+To use the PaddleX high-performance inference plugin, please refer to the [PaddleX High-Performance Inference Guide](./high_performance_inference.en.md).
 
 You can use the `--use_hpip` flag to enable the high-performance inference plugin. An example is as follows:
 
@@ -330,7 +334,7 @@ docker run \
 - If CPU deployment is required, there is no need to specify `--gpus`.
 - If you need to enter the container for debugging, you can replace `/bin/bash server.sh` in the command with `/bin/bash`. Then execute `/bin/bash server.sh` inside the container.
 - If you want the server to run in the background, you can replace `-it` in the command with `-d`. After the container starts, you can view the container logs with `docker logs -f {container ID}`.
-- Add `-e PADDLEX_USE_HPIP=1` to use the PaddleX high-performance inference plugin to accelerate the pipeline inference process. However, please note that not all pipelines support using the high-performance inference plugin. Please refer to the [PaddleX High-Performance Inference Guide](./high_performance_inference.en.md) for more information.
+- Add `-e PADDLEX_USE_HPIP=1` to use the PaddleX high-performance inference plugin to accelerate the pipeline inference process. Please refer to the [PaddleX High-Performance Inference Guide](./high_performance_inference.en.md) for more information.
 
 You may observe output similar to the following:
 

+ 6 - 2
docs/pipeline_deploy/serving.md

@@ -86,13 +86,17 @@ INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
 <td><code>--use_hpip</code></td>
 <td>如果指定,则启用高性能推理插件。</td>
 </tr>
+<tr>
+<td><code>--hpi_config</code></td>
+<td>高性能推理配置。</td>
+</tr>
 </tbody>
 </table>
 
 在对于服务响应时间要求较严格的应用场景中,可以使用 PaddleX 高性能推理插件对模型推理及前后处理进行加速,从而降低响应时间、提升吞吐量。
 
-使用 PaddleX 高性能推理插件,请参考 [PaddleX 高性能推理指南](./high_performance_inference.md) 。不是所有的产线、模型和环境都支持使用高性能推理插件。支持的详细情况请参考支持使用高性能推理插件的产线与模型部分。
+使用 PaddleX 高性能推理插件,请参考 [PaddleX 高性能推理指南](./high_performance_inference.md)。
 
 可以通过指定 `--use_hpip` 以使用高性能推理插件。示例如下:
 
@@ -330,7 +334,7 @@ docker run \
 - 如果希望使用 CPU 部署,则不需要指定 `--gpus`。
 - 如果需要进入容器内部调试,可以将命令中的 `/bin/bash server.sh` 替换为 `/bin/bash`,然后在容器中执行 `/bin/bash server.sh`。
 - 如果希望服务器在后台运行,可以将命令中的 `-it` 替换为 `-d`。容器启动后,可通过 `docker logs -f {容器 ID}` 查看容器日志。
-- 在命令中添加 `-e PADDLEX_USE_HPIP=1` 可以使用 PaddleX 高性能推理插件加速产线推理过程。但请注意,并非所有产线都支持使用高性能推理插件。请参考 [PaddleX 高性能推理指南](./high_performance_inference.md) 获取更多信息。
+- 在命令中添加 `-e PADDLEX_USE_HPIP=1` 可以使用 PaddleX 高性能推理插件加速产线推理过程。请参考 [PaddleX 高性能推理指南](./high_performance_inference.md) 获取更多信息。
 
 可观察到类似下面的输出信息:
 

+ 2 - 0
docs/pipeline_usage/instructions/pipeline_CLI_usage.en.md

@@ -25,6 +25,8 @@ This single step completes the inference prediction and saves the prediction res
 * `input`: The path to the data file to be predicted, supporting local file paths, local directories containing data files to be predicted, and file URL links;
* `device`: Used to set the inference device, e.g. "cpu" or "gpu:2" (for GPUs, a card number can be specified). By default, GPU 0 is used if available; otherwise, the CPU is used;
 * `save_path`: The save path for prediction results. By default, the prediction results will not be saved;
+* `use_hpip`: Whether to enable the high-performance inference plugin;
+* `hpi_config`: The high-performance inference configuration;
* _`inference hyperparameters`_: Different pipelines support different inference hyperparameters, and these parameters take precedence over the pipeline's default configuration. For example, the image classification pipeline supports the `topk` parameter. Please refer to the documentation of the specific pipeline for details. A combined invocation is sketched below.
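A sketch combining these options in a single CLI call, wrapped in Python to match the other examples; the input file and backend choice are assumptions:

```python
import subprocess

# Invoke the PaddleX CLI with the options listed above; --use_hpip is a
# bare flag, while --hpi_config takes a JSON string.
subprocess.run(
    [
        "paddlex",
        "--pipeline", "image_classification",
        "--input", "demo.jpg",  # assumed local image
        "--save_path", "./output",
        "--use_hpip",
        "--hpi_config", '{"backend": "onnxruntime"}',
    ],
    check=True,  # raise CalledProcessError on a non-zero exit status
)
```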
 
 ### 2. Custom Pipeline Configuration

+ 3 - 1
docs/pipeline_usage/instructions/pipeline_CLI_usage.md

@@ -26,7 +26,9 @@ paddlex --pipeline image_classification \
 * `input`:待预测数据文件路径,支持本地文件路径、包含待预测数据文件的本地目录、文件URL链接;
 * `device`:用于设置模型推理设备,如为 GPU 则可以指定卡号,如 “cpu”、“gpu:2”,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU;
 * `save_path`:预测结果的保存路径,默认情况下,不保存预测结果;
-* _`推理超参数`_:不同产线根据具体情况提供了不同的推理超参数设置,该参数优先级大于产线配置文件。对于图像分类产线,则支持通过 `topk` 参数设置输出的前 k 个预测结果。其他产线请参考对应的产线说明文档;
+* `use_hpip`:启用高性能推理插件;
+* `hpi_config`:高性能推理配置;
+* _`推理超参数`_:不同产线根据具体情况提供了不同的推理超参数设置,该参数优先级大于产线配置文件。对于图像分类产线,则支持通过 `topk` 参数设置输出的前 k 个预测结果。其他产线请参考对应的产线说明文档。
 
 ### 2. 自定义产线配置
 

+ 2 - 0
docs/pipeline_usage/instructions/pipeline_python_API.en.md

@@ -34,6 +34,8 @@ In short, there are only three steps:
     * `pipeline`: `str` type, the pipeline name or the local pipeline configuration file path, such as "image_classification", "/path/to/image_classification.yaml";
    * `device`: `str` type, used to set the inference device, e.g. "cpu" or "gpu:2" (for GPUs, a card number can be specified). By default, GPU 0 is used if available; otherwise, the CPU is used;
     * `pp_option`: `PaddlePredictorOption` type, used to change inference settings (e.g. the operating mode). Please refer to [4-Inference Configuration](#4-inference-configuration) for more details;
    * `use_hpip`: `bool | None` type, whether to enable the high-performance inference plugin (`None` means the setting in the configuration file is used);
    * `hpi_config`: `dict | None` type, the high-performance inference configuration (see the sketch after this list);
   * Return Value: `BasePredictor` type.
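A minimal sketch of `create_pipeline` with the parameters listed above; the backend choice is an assumption for illustration:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="image_classification",        # pipeline name or local config path
    device="gpu:2",                         # optional device override
    use_hpip=True,                          # None defers to the configuration file
    hpi_config={"backend": "onnxruntime"},  # assumed backend choice
)
```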
 
 ### 2. Perform Inference by Calling the `predict()` Method of the Prediction Model Pipeline Object

+ 2 - 0
docs/pipeline_usage/instructions/pipeline_python_API.md

@@ -35,6 +35,8 @@ for res in output:
     * `pipeline`:`str` 类型,产线名或是本地产线配置文件路径,如“image_classification”、“/path/to/image_classification.yaml”;
     * `device`:`str` 类型,用于设置模型推理设备,如为 GPU 则可以指定卡号,如“cpu”、“gpu:2”,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU;
     * `pp_option`:`PaddlePredictorOption` 类型,用于改变运行模式等配置项,关于推理配置的详细说明,请参考下文[4-推理配置](#4-推理配置);
+    * `use_hpip`:`bool | None` 类型,是否启用高性能推理插件(`None` 表示使用配置文件中的配置);
+    * `hpi_config`:`dict | None` 类型,高性能推理配置;
   * 返回值:`BasePredictor`类型。
 
 ### 2. 调用预测模型产线对象的`predict()`方法进行推理预测

+ 10 - 2
docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.en.md

@@ -225,9 +225,17 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference, only available when the pipeline supports high-performance inference.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file will be used.</td>
 <td><code>bool</code> | <code>None</code></td>
-<td><code>False</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
 </tr>
 </tbody>
 </table>

+ 11 - 3
docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.md

@@ -216,9 +216,17 @@ python paddlex/inference/models/3d_bev_detection/visualizer_3d.py --save_path=".
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理,仅当该产线支持高性能推理时可用。</td>
-<td><code>bool</code></td>
-<td><code>False</code></td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td><code>bool</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
 </tr>
 </tbody>
 </table>

+ 10 - 2
docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.en.md

@@ -221,9 +221,17 @@ In the above Python script, the following steps are performed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference, available only when the pipeline supports high-performance inference.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file will be used.</td>
 <td><code>bool</code> | <code>None</code></td>
-<td><code>False</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
 </tr>
 </tbody>
 </table>

+ 11 - 3
docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.md

@@ -221,9 +221,17 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理,仅当该产线支持高性能推理时可用。</td>
-<td><code>bool</code></td>
-<td><code>False</code></td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td><code>bool</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
 </tr>
 </tbody>
 </table>

+ 10 - 2
docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.en.md

@@ -189,9 +189,17 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. This is only available if the pipeline supports high-performance inference.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file will be used.</td>
 <td><code>bool</code> | <code>None</code></td>
-<td><code>False</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
 </tr>
 </tbody>
 </table>

+ 11 - 3
docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.md

@@ -188,9 +188,17 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理,仅当该产线支持高性能推理时可用。</td>
-<td><code>bool</code></td>
-<td><code>False</code></td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td><code>bool</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>高性能推理配置</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>无</td>
+<td><code>None</code></td>
 </tr>
 </tbody>
 </table>

+ 10 - 2
docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.en.md

@@ -208,9 +208,17 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable high-performance inference. This is only available if the pipeline supports high-performance inference.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file will be used.</td>
 <td><code>bool</code> | <code>None</code></td>
-<td><code>False</code></td>
+<td>None</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>hpi_config</code></td>
+<td>High-performance inference configuration</td>
+<td><code>dict</code> | <code>None</code></td>
+<td>None</td>
+<td><code>None</code></td>
 </tr>
 </tbody>
 </table>

Some files were not shown because too many files changed in this diff