
[Docs] Remove authentication instructions in docs (#3791)

* Remove authentication instructions in docs

* Fix ultra-infer cmake

* Reject CUDA 12 when installing hpip

* Refine msg

* Also remove

* Normalize package name

* Decouple

* Bump paddle2onnx version to 2.0.0a5

* No questionnaires
Lin Manhui, 7 months ago (commit 0afce8098f)
32 changed files with 184 additions and 233 deletions
  1. docs/pipeline_deploy/serving.en.md (+35 -59)
  2. docs/pipeline_deploy/serving.md (+35 -60)
  3. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.en.md (+1 -1)
  4. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.md (+1 -1)
  5. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.en.md (+1 -1)
  6. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.md (+1 -1)
  7. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.en.md (+1 -1)
  8. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md (+1 -1)
  9. docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.en.md (+1 -1)
  10. docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.md (+1 -1)
  11. docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.en.md (+1 -1)
  12. docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.md (+1 -1)
  13. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.en.md (+1 -1)
  14. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md (+1 -1)
  15. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.en.md (+1 -1)
  16. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md (+1 -1)
  17. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.en.md (+1 -1)
  18. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md (+1 -1)
  19. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.en.md (+1 -1)
  20. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md (+1 -1)
  21. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.en.md (+1 -1)
  22. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.md (+1 -1)
  23. libs/ultra-infer/CMakeLists.txt (+2 -2)
  24. libs/ultra-infer/cmake/openvino.cmake (+2 -7)
  25. paddlex/inference/models/common/static_infer.py (+1 -1)
  26. paddlex/inference/serving/basic_serving/_pipeline_apps/_common/common.py (+1 -1)
  27. paddlex/inference/utils/hpi.py (+14 -49)
  28. paddlex/inference/utils/model_paths.py (+51 -0)
  29. paddlex/paddlex_cli.py (+3 -0)
  30. paddlex/repo_manager/repo.py (+2 -2)
  31. paddlex/utils/deps.py (+1 -1)
  32. paddlex/utils/env.py (+17 -31)
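The commit bullets above mention rejecting CUDA 12 when installing the high-performance inference plugin, but the actual three-line change to `paddlex/paddlex_cli.py` is not shown in this excerpt. Purely as a hypothetical illustration of such a guard:

```python
# Hypothetical illustration only: this is NOT the actual paddlex_cli.py change
# (not shown in this diff). It sketches the kind of check the
# "Reject CUDA 12 when installing hpip" bullet describes.
def check_cuda_supported_for_hpip(cuda_version: str) -> None:
    """Refuse to install the high-performance inference plugin on CUDA 12.x."""
    major = cuda_version.split(".", 1)[0]
    if major == "12":
        raise RuntimeError(
            "Installing the high-performance inference plugin is not supported "
            "on CUDA 12. Please use a CUDA 11.x environment."
        )


check_cuda_supported_for_hpip("11.8")   # passes silently
# check_cuda_supported_for_hpip("12.3")  # would raise RuntimeError
```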

+ 35 - 59
docs/pipeline_deploy/serving.en.md

@@ -92,7 +92,7 @@ The command-line options related to serving are as follows:
 
 In application scenarios where strict requirements are placed on service response time, the PaddleX high-performance inference plugin can be used to accelerate model inference and pre/post-processing, thereby reducing response time and increasing throughput.
 
-To use the PaddleX high-performance inference plugin, please refer to the [PaddleX High-Performance Inference Guide](./high_performance_inference.en.md) for instructions on installing the high-performance inference plugin, obtaining a serial number, and completing the activation process. Additionally, note that not all pipelines, models, and environments support the use of the high-performance inference plugin. For detailed information on supported pipelines and models, please refer to the section on supported pipelines and models for high-performance inference plugins.
+To use the PaddleX high-performance inference plugin, please refer to the [PaddleX High-Performance Inference Guide](./high_performance_inference.en.md). Note that not all pipelines, models, and environments support the use of the high-performance inference plugin. For detailed information on supported pipelines and models, please refer to the section on supported pipelines and models for high-performance inference plugins.
 
 You can use the `--use_hpip` flag to enable the high-performance inference plugin. An example is as follows:
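The bash example this sentence introduces lies just outside the excerpted hunk; the hunk header of the Chinese counterpart further down shows it as `paddlex --serve --pipeline image_classification --use_hpip`. A minimal Python sketch that launches the same command (assuming the `paddlex` CLI is installed):

```python
# Minimal sketch: start the serving app with the high-performance inference
# plugin enabled. Equivalent to the CLI command shown in the docs.
import subprocess

subprocess.run(
    ["paddlex", "--serve", "--pipeline", "image_classification", "--use_hpip"],
    check=True,
)
```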
 
@@ -124,147 +124,133 @@ Find the high-stability serving SDK corresponding to the pipeline in the table b
 <tbody>
 <tr>
 <td>PP-ChatOCR-doc v3</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz">paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz">paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General image classification</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_image_classification_sdk.tar.gz">paddlex_hps_image_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_image_classification_sdk.tar.gz">paddlex_hps_image_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General object detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_object_detection_sdk.tar.gz">paddlex_hps_object_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_object_detection_sdk.tar.gz">paddlex_hps_object_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General instance segmentation</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_instance_segmentation_sdk.tar.gz">paddlex_hps_instance_segmentation_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_instance_segmentation_sdk.tar.gz">paddlex_hps_instance_segmentation_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General semantic segmentation</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_semantic_segmentation_sdk.tar.gz">paddlex_hps_semantic_segmentation_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_semantic_segmentation_sdk.tar.gz">paddlex_hps_semantic_segmentation_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Image multi-label classification</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_image_multilabel_classification_sdk.tar.gz">paddlex_hps_image_multilabel_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_image_multilabel_classification_sdk.tar.gz">paddlex_hps_image_multilabel_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General image recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_PP-ShiTuV2_sdk.tar.gz">paddlex_hps_PP-ShiTuV2_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_PP-ShiTuV2_sdk.tar.gz">paddlex_hps_PP-ShiTuV2_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Pedestrian attribute recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz">paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz">paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Vehicle attribute recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz">paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz">paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Face recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_face_recognition_sdk.tar.gz">paddlex_hps_face_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_face_recognition_sdk.tar.gz">paddlex_hps_face_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Small object detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_small_object_detection_sdk.tar.gz">paddlex_hps_small_object_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_small_object_detection_sdk.tar.gz">paddlex_hps_small_object_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Image anomaly detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_anomaly_detection_sdk.tar.gz">paddlex_hps_anomaly_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_anomaly_detection_sdk.tar.gz">paddlex_hps_anomaly_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Human keypoint detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_human_keypoint_detection_sdk.tar.gz">paddlex_hps_human_keypoint_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_human_keypoint_detection_sdk.tar.gz">paddlex_hps_human_keypoint_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Open vocabulary detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_open_vocabulary_detection_sdk.tar.gz">paddlex_hps_open_vocabulary_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_open_vocabulary_detection_sdk.tar.gz">paddlex_hps_open_vocabulary_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Open vocabulary segmentation</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz">paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz">paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Rotated object detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_rotated_object_detection_sdk.tar.gz">paddlex_hps_rotated_object_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_rotated_object_detection_sdk.tar.gz">paddlex_hps_rotated_object_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>3D multi-modal fusion detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_3d_bev_detection_sdk.tar.gz">paddlex_hps_3d_bev_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_3d_bev_detection_sdk.tar.gz">paddlex_hps_3d_bev_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General OCR</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_OCR_sdk.tar.gz">paddlex_hps_OCR_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_OCR_sdk.tar.gz">paddlex_hps_OCR_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General table recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_table_recognition_sdk.tar.gz">paddlex_hps_table_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_table_recognition_sdk.tar.gz">paddlex_hps_table_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General table recognition v2</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_table_recognition_v2_sdk.tar.gz">paddlex_hps_table_recognition_v2_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_table_recognition_v2_sdk.tar.gz">paddlex_hps_table_recognition_v2_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General layout parsing</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_layout_parsing_sdk.tar.gz">paddlex_hps_layout_parsing_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_layout_parsing_sdk.tar.gz">paddlex_hps_layout_parsing_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>PP-StructureV3</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_PP-StructureV3_sdk.tar.gz">paddlex_hps_PP-StructureV3_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_PP-StructureV3_sdk.tar.gz">paddlex_hps_PP-StructureV3_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Formula recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_formula_recognition_sdk.tar.gz">paddlex_hps_formula_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_formula_recognition_sdk.tar.gz">paddlex_hps_formula_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Seal text recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_seal_recognition_sdk.tar.gz">paddlex_hps_seal_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_seal_recognition_sdk.tar.gz">paddlex_hps_seal_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Document image preprocessing</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_doc_preprocessor_sdk.tar.gz">paddlex_hps_doc_preprocessor_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_doc_preprocessor_sdk.tar.gz">paddlex_hps_doc_preprocessor_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Time series forecasting</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_ts_forecast_sdk.tar.gz">paddlex_hps_ts_forecast_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_ts_forecast_sdk.tar.gz">paddlex_hps_ts_forecast_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Time series anomaly detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_ts_anomaly_detection_sdk.tar.gz">paddlex_hps_ts_anomaly_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_ts_anomaly_detection_sdk.tar.gz">paddlex_hps_ts_anomaly_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Time series classification</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_ts_classification_sdk.tar.gz">paddlex_hps_ts_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_ts_classification_sdk.tar.gz">paddlex_hps_ts_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>Multilingual speech recognition</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_multilingual_speech_recognition_sdk.tar.gz">paddlex_hps_multilingual_speech_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_multilingual_speech_recognition_sdk.tar.gz">paddlex_hps_multilingual_speech_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General video classification</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_video_classification_sdk.tar.gz">paddlex_hps_video_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_video_classification_sdk.tar.gz">paddlex_hps_video_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>General video detection</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_video_detection_sdk.tar.gz">paddlex_hps_video_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_video_detection_sdk.tar.gz">paddlex_hps_video_detection_sdk.tar.gz</a></td>
 </tr>
 </tbody>
 </table>
 </details>
 
-### 2.2 Obtain the Serial Number
-
-Using the high-stability serving solution requires obtaining a serial number and completing activation on the machine used for deployment. The method to obtain the serial number is as follows:
-
-On [Baidu AI Studio Community - AI Learning and Training Platform](https://aistudio.baidu.com/paddlex/commercialization), under the "开源模型产线部署序列号咨询与获取" (open-source pipeline deployment serial number inquiry and acquisition) section, select "立即获取" (acquire now) as shown in the following image:
-
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipeline_deploy/image-1.png"/>
-
-Select the pipeline you wish to deploy and click "获取" (acquire). Afterwards, you can find the acquired serial number in the "开源产线部署SDK序列号管理" (open-source pipeline deployment SDK serial number management) section at the bottom of the page:
-
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipeline_deploy/image-2.png"/>
-
-**Please note**: Each serial number can only be bound to a unique device fingerprint and can only be bound once. This means that if a user deploys a pipeline on different machines, a separate serial number must be prepared for each machine. **The high-stability serving solution is completely free.** PaddleX's authentication mechanism is targeted to count the number of deployments across various pipelines and provide pipeline efficiency analysis for our team through data modeling, so as to optimize resource allocation and improve the efficiency of key pipelines. It is important to note that the authentication process only uses non-sensitive information such as disk partition UUIDs, and PaddleX does not collect sensitive data such as device telemetry data. Therefore, in theory, **the authentication server cannot obtain any sensitive information**.
-
-### 2.3 Adjust Configurations
+### 2.2 Adjust Configurations
 
 The `server/pipeline_config.yaml` file of the high-stability serving SDK is the pipeline configuration file. Users can modify this file to set the model directory to use, etc.
 
@@ -305,7 +291,7 @@ A common requirement is to adjust the number of execution instances for horizont
 
 For more configuration details, please refer to the [Triton Inference Server documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/model_configuration.html).
 
-### 2.4 Run the Server
+### 2.3 Run the Server
 
 The machine used for deployment needs to have Docker Engine version 19.03 or higher installed.
 
@@ -329,11 +315,7 @@ With the image prepared, navigate to the `server` directory and execute the foll
 docker run \
     -it \
     -e PADDLEX_HPS_DEVICE_TYPE={deployment device type} \
-    -e PADDLEX_HPS_SERIAL_NUMBER={serial number} \
-    -e PADDLEX_HPS_UPDATE_LICENSE=1 \
     -v "$(pwd)":/workspace \
-    -v "${HOME}/.baidu/paddlex/licenses":/root/.baidu/paddlex/licenses \
-    -v /dev/disk/by-uuid:/dev/disk/by-uuid \
     -w /workspace \
     --rm \
     --gpus all \
@@ -346,12 +328,6 @@ docker run \
 
 - The deployment device type can be `cpu` or `gpu`, and the CPU-only image supports only `cpu`.
 - If CPU deployment is required, there is no need to specify `--gpus`.
-- The above commands can only be executed properly after successful activation. PaddleX offers two activation methods: online activation and offline activation. They are detailed as follows:
-
-    - Online activation: Set `PADDLEX_HPS_UPDATE_LICENSE` to `1` for the first execution to enable the program to automatically update the license and complete activation. When executing the command again, you may set `PADDLEX_HPS_UPDATE_LICENSE` to `0` to avoid online license updates.
-    - Offline Activation: Follow the instructions in the serial number management section to obtain the machine’s device fingerprint. Bind the serial number with the device fingerprint to obtain the certificate and complete the activation. For this activation method, you need to manually place the certificate in the `${HOME}/.baidu/paddlex/licenses` directory on the machine (if the directory does not exist, you will need to create it). When using this method, set `PADDLEX_HPS_UPDATE_LICENSE` to `0` to avoid online license updates.
-
-- It is necessary to ensure that the `/dev/disk/by-uuid` directory on the host machine exists and is not empty, and that this directory is correctly mounted in order to perform activation properly.
 - If you need to enter the container for debugging, you can replace `/bin/bash server.sh` in the command with `/bin/bash`. Then execute `/bin/bash server.sh` inside the container.
 - If you want the server to run in the background, you can replace `-it` in the command with `-d`. After the container starts, you can view the container logs with `docker logs -f {container ID}`.
 - Add `-e PADDLEX_USE_HPIP=1` to use the PaddleX high-performance inference plugin to accelerate the pipeline inference process. However, please note that not all pipelines support using the high-performance inference plugin. Please refer to the [PaddleX High-Performance Inference Guide](./high_performance_inference.en.md) for more information.
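For reference, the launch command that results from this change can be reconstructed from the unchanged context lines above. The sketch below assembles it in Python; the image tag is a placeholder (it is not given in this diff), and the trailing `/bin/bash server.sh` comes from the debugging note in the list above.

```python
# Sketch of the post-change `docker run` invocation: the serial-number and
# license-related environment variables and mounts are no longer passed.
import os
import subprocess

device_type = "gpu"            # or "cpu" for the CPU-only image
image = "<paddlex-hps-image>"  # placeholder; use the image documented for your SDK

cmd = [
    "docker", "run", "-it",
    "-e", f"PADDLEX_HPS_DEVICE_TYPE={device_type}",
    "-v", f"{os.getcwd()}:/workspace",
    "-w", "/workspace",
    "--rm",
    "--gpus", "all",
    image,
    "/bin/bash", "server.sh",
]
subprocess.run(cmd, check=True)
```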
@@ -364,7 +340,7 @@ I1216 11:37:21.602333 35 http_server.cc:2815] Started HTTPService at 0.0.0.0:800
 I1216 11:37:21.643494 35 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
 ```
 
-### 2.5 Invoke the Service
+### 2.4 Invoke the Service
 
 Currently, only the Python client is supported for calling the service. Supported Python versions are 3.8 to 3.12.
 

+ 35 - 60
docs/pipeline_deploy/serving.md

@@ -92,7 +92,7 @@ INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
 
 在对于服务响应时间要求较严格的应用场景中,可以使用 PaddleX 高性能推理插件对模型推理及前后处理进行加速,从而降低响应时间、提升吞吐量。
 
-使用 PaddleX 高性能推理插件,请参考 [PaddleX 高性能推理指南](./high_performance_inference.md) 中安装高性能推理插件、获取序列号与激活部分完成插件的安装与序列号的申请。同时,不是所有的产线、模型和环境都支持使用高性能推理插件,支持的详细情况请参考支持使用高性能推理插件的产线与模型部分。
+使用 PaddleX 高性能推理插件,请参考 [PaddleX 高性能推理指南](./high_performance_inference.md)。不是所有的产线、模型和环境都支持使用高性能推理插件,支持的详细情况请参考支持使用高性能推理插件的产线与模型部分。
 
 可以通过指定 `--use_hpip` 以使用高性能推理插件。示例如下:
 
@@ -124,148 +124,133 @@ paddlex --serve --pipeline image_classification --use_hpip
 <tbody>
 <tr>
 <td>文档场景信息抽取 v3</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz">paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz">paddlex_hps_PP-ChatOCRv3-doc_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用图像分类</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_image_classification_sdk.tar.gz">paddlex_hps_image_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_image_classification_sdk.tar.gz">paddlex_hps_image_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用目标检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_object_detection_sdk.tar.gz">paddlex_hps_object_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_object_detection_sdk.tar.gz">paddlex_hps_object_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用实例分割</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_instance_segmentation_sdk.tar.gz">paddlex_hps_instance_segmentation_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_instance_segmentation_sdk.tar.gz">paddlex_hps_instance_segmentation_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用语义分割</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_semantic_segmentation_sdk.tar.gz">paddlex_hps_semantic_segmentation_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_semantic_segmentation_sdk.tar.gz">paddlex_hps_semantic_segmentation_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用图像多标签分类</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_image_multilabel_classification_sdk.tar.gz">paddlex_hps_image_multilabel_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_image_multilabel_classification_sdk.tar.gz">paddlex_hps_image_multilabel_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用图像识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_PP-ShiTuV2_sdk.tar.gz">paddlex_hps_PP-ShiTuV2_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_PP-ShiTuV2_sdk.tar.gz">paddlex_hps_PP-ShiTuV2_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>行人属性识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz">paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz">paddlex_hps_pedestrian_attribute_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>车辆属性识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz">paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz">paddlex_hps_vehicle_attribute_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>人脸识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_face_recognition_sdk.tar.gz">paddlex_hps_face_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_face_recognition_sdk.tar.gz">paddlex_hps_face_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>小目标检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_small_object_detection_sdk.tar.gz">paddlex_hps_small_object_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_small_object_detection_sdk.tar.gz">paddlex_hps_small_object_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>图像异常检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_anomaly_detection_sdk.tar.gz">paddlex_hps_anomaly_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_anomaly_detection_sdk.tar.gz">paddlex_hps_anomaly_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>人体关键点检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_human_keypoint_detection_sdk.tar.gz">paddlex_hps_human_keypoint_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_human_keypoint_detection_sdk.tar.gz">paddlex_hps_human_keypoint_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>开放词汇检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_open_vocabulary_detection_sdk.tar.gz">paddlex_hps_open_vocabulary_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_open_vocabulary_detection_sdk.tar.gz">paddlex_hps_open_vocabulary_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>开放词汇分割</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz">paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz">paddlex_hps_open_vocabulary_segmentation_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>旋转目标检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_rotated_object_detection_sdk.tar.gz">paddlex_hps_rotated_object_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_rotated_object_detection_sdk.tar.gz">paddlex_hps_rotated_object_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>3D 多模态融合检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_3d_bev_detection_sdk.tar.gz">paddlex_hps_3d_bev_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_3d_bev_detection_sdk.tar.gz">paddlex_hps_3d_bev_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用 OCR</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_OCR_sdk.tar.gz">paddlex_hps_OCR_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_OCR_sdk.tar.gz">paddlex_hps_OCR_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用表格识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_table_recognition_sdk.tar.gz">paddlex_hps_table_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_table_recognition_sdk.tar.gz">paddlex_hps_table_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用表格识别 v2</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_table_recognition_v2_sdk.tar.gz">paddlex_hps_table_recognition_v2_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_table_recognition_v2_sdk.tar.gz">paddlex_hps_table_recognition_v2_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用版面解析</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_layout_parsing_sdk.tar.gz">paddlex_hps_layout_parsing_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_layout_parsing_sdk.tar.gz">paddlex_hps_layout_parsing_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用版面解析 v3</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_PP-StructureV3_sdk.tar.gz">paddlex_hps_PP-StructureV3_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_PP-StructureV3_sdk.tar.gz">paddlex_hps_PP-StructureV3_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>公式识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_formula_recognition_sdk.tar.gz">paddlex_hps_formula_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_formula_recognition_sdk.tar.gz">paddlex_hps_formula_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>印章文本识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_seal_recognition_sdk.tar.gz">paddlex_hps_seal_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_seal_recognition_sdk.tar.gz">paddlex_hps_seal_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>文档图像预处理</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_doc_preprocessor_sdk.tar.gz">paddlex_hps_doc_preprocessor_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_doc_preprocessor_sdk.tar.gz">paddlex_hps_doc_preprocessor_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>时序预测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_ts_forecast_sdk.tar.gz">paddlex_hps_ts_forecast_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_ts_forecast_sdk.tar.gz">paddlex_hps_ts_forecast_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>时序异常检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_ts_anomaly_detection_sdk.tar.gz">paddlex_hps_ts_anomaly_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_ts_anomaly_detection_sdk.tar.gz">paddlex_hps_ts_anomaly_detection_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>时序分类</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_ts_classification_sdk.tar.gz">paddlex_hps_ts_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_ts_classification_sdk.tar.gz">paddlex_hps_ts_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>多语种语音识别</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_multilingual_speech_recognition_sdk.tar.gz">paddlex_hps_multilingual_speech_recognition_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_multilingual_speech_recognition_sdk.tar.gz">paddlex_hps_multilingual_speech_recognition_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用视频分类</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_video_classification_sdk.tar.gz">paddlex_hps_video_classification_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_video_classification_sdk.tar.gz">paddlex_hps_video_classification_sdk.tar.gz</a></td>
 </tr>
 <tr>
 <td>通用视频检测</td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc0/paddlex_hps_video_detection_sdk.tar.gz">paddlex_hps_video_detection_sdk.tar.gz</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/deploy/paddlex_hps/public/sdks/v3.0.0rc1/paddlex_hps_video_detection_sdk.tar.gz">paddlex_hps_video_detection_sdk.tar.gz</a></td>
 </tr>
 </tbody>
 </table>
 </details>
 
-
-### 2.2 获取序列号
-
-使用高稳定性服务化部署方案需要获取序列号,并在用于部署的机器上完成激活。获取序列号的方式如下:
-
-在 [飞桨 AI Studio 星河社区-人工智能学习与实训社区](https://aistudio.baidu.com/paddlex/commercialization) 的“开源模型产线部署序列号咨询与获取”部分选择“立即获取”,如下图所示:
-
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipeline_deploy/image-1.png">
-
-选择需要部署的产线,并点击“获取”。之后,可以在页面下方的“开源产线部署SDK序列号管理”部分找到获取到的序列号:
-
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipeline_deploy/image-2.png">
-
-**请注意**:每个序列号只能绑定到唯一的设备指纹,且只能绑定一次。这意味着用户如果使用不同的机器部署产线,则必须为每台机器准备单独的序列号。**高稳定性服务化部署完全免费。** PaddleX 的鉴权机制核心在于统计各产线的部署数量,并通过数据建模为团队提供产线效能分析,以便进行资源的优化配置和重点产线效率的提升。需要特别说明的是,鉴权过程只使用硬盘分区 UUID 等非敏感信息,PaddleX 也并不采集设备遥测数据等敏感数据,因此理论上**鉴权服务器无法获取到任何敏感信息**。
-
-### 2.3 调整配置
+### 2.2 调整配置
 
 高稳定性服务化部署 SDK 的 `server/pipeline_config.yaml` 文件为产线配置文件。用户可以修改该文件以设置要使用的模型目录等。
 
@@ -306,7 +291,7 @@ paddlex --serve --pipeline image_classification --use_hpip
 
 关于更多配置细节,请参阅 [Triton Inference Server 文档](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/model_configuration.html)。
 
-### 2.4 运行服务器
+### 2.3 运行服务器
 
 用于部署的机器上需要安装有 19.03 或更高版本的 Docker Engine。
 
@@ -330,11 +315,7 @@ paddlex --serve --pipeline image_classification --use_hpip
 docker run \
     -it \
     -e PADDLEX_HPS_DEVICE_TYPE={部署设备类型} \
-    -e PADDLEX_HPS_SERIAL_NUMBER={序列号} \
-    -e PADDLEX_HPS_UPDATE_LICENSE=1 \
     -v "$(pwd)":/workspace \
-    -v "${HOME}/.baidu/paddlex/licenses":/root/.baidu/paddlex/licenses \
-    -v /dev/disk/by-uuid:/dev/disk/by-uuid \
     -w /workspace \
     --rm \
     --gpus all \
@@ -347,12 +328,6 @@ docker run \
 
 - 部署设备类型可以为 `cpu` 或 `gpu`,CPU-only 镜像仅支持 `cpu`。
 - 如果希望使用 CPU 部署,则不需要指定 `--gpus`。
-- 以上命令必须在激活成功后才可以正常执行。PaddleX 提供两种激活方式:离线激活和在线激活。具体说明如下:
-
-    - 联网激活:在第一次执行时设置 `PADDLEX_HPS_UPDATE_LICENSE` 为 `1`,使程序自动更新证书并完成激活。再次执行命令时可以将 `PADDLEX_HPS_UPDATE_LICENSE` 设置为 `0` 以避免联网更新证书。
-    - 离线激活:按照序列号管理部分中的指引,获取机器的设备指纹,并将序列号与设备指纹绑定以获取证书,完成激活。使用这种激活方式,需要手动将证书存放在机器的 `${HOME}/.baidu/paddlex/licenses` 目录中(如果目录不存在,需要创建目录)。使用这种方式时,将 `PADDLEX_HPS_UPDATE_LICENSE` 设置为 `0` 以避免联网更新证书。
-
-- 必须确保宿主机的 `/dev/disk/by-uuid` 存在且非空,并正确挂载该目录,才能正常执行激活。
 - 如果需要进入容器内部调试,可以将命令中的 `/bin/bash server.sh` 替换为 `/bin/bash`,然后在容器中执行 `/bin/bash server.sh`。
 - 如果希望服务器在后台运行,可以将命令中的 `-it` 替换为 `-d`。容器启动后,可通过 `docker logs -f {容器 ID}` 查看容器日志。
 - 在命令中添加 `-e PADDLEX_USE_HPIP=1` 可以使用 PaddleX 高性能推理插件加速产线推理过程。但请注意,并非所有产线都支持使用高性能推理插件。请参考 [PaddleX 高性能推理指南](./high_performance_inference.md) 获取更多信息。
@@ -365,7 +340,7 @@ I1216 11:37:21.602333 35 http_server.cc:2815] Started HTTPService at 0.0.0.0:800
 I1216 11:37:21.643494 35 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
 ```
 
-### 2.5 调用服务
+### 2.4 调用服务
 
 目前,仅支持使用 Python 客户端调用服务。支持的 Python 版本为 3.8 至 3.12。
 

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.en.md

@@ -1458,7 +1458,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation of the results generated by the pipeline's <code>visual_predict</code> method, with the <code>input_path</code> field removed.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation of the results generated by the pipeline's <code>visual_predict</code> method, with the <code>input_path</code> and the <code>page_index</code> fields removed.</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.md

@@ -1463,7 +1463,7 @@ for res in visual_predict_res:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>visual_predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>visual_predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.en.md

@@ -1595,7 +1595,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation of the results generated by the pipeline's <code>visual_predict</code> method, with the <code>input_path</code> field removed.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation of the results generated by the pipeline's <code>visual_predict</code> method, with the <code>input_path</code> and the <code>page_index</code> fields removed.</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.md

@@ -1798,7 +1798,7 @@ for res in visual_predict_res:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>visual_predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>visual_predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.en.md

@@ -1073,7 +1073,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>The simplified version of the <code>res</code> field in the JSON representation generated by the <code>predict</code> method of the production object, with the <code>input_path</code> field removed.</td>
+<td>The simplified version of the <code>res</code> field in the JSON representation generated by the <code>predict</code> method of the production object, with the <code>input_path</code> and the <code>page_index</code> fields removed.</td>
 </tr>
 <tr>
 <td><code>ocrImage</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md

@@ -1068,7 +1068,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>ocrImage</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.en.md

@@ -1667,7 +1667,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the <code>predict</code> method of the pipeline object, with the <code>input_path</code> field removed.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the <code>predict</code> method of the pipeline object, with the <code>input_path</code> and the <code>page_index</code> fields removed.</td>
 </tr>
 <tr>
 <td><code>markdown</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.md

@@ -1613,7 +1613,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>markdown</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.en.md

@@ -564,7 +564,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the pipeline object's <code>predict</code> method, excluding the <code>input_path</code> field.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the pipeline object's <code>predict</code> method, excluding the <code>input_path</code> and the <code>page_index</code> fields.</td>
 </tr>
 <tr>
 <td><code>docPreprocessingImage</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.md

@@ -566,7 +566,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>docPreprocessingImage</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.en.md

@@ -877,7 +877,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the pipeline object's <code>predict</code> method, excluding the <code>input_path</code> field.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the pipeline object's <code>predict</code> method, excluding the <code>input_path</code> and the <code>page_index</code> fields.</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md

@@ -873,7 +873,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.en.md

@@ -1445,7 +1445,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation generated by the <code>predict</code> method of the production object, with the <code>input_path</code> field removed.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation generated by the <code>predict</code> method of the production object, with the <code>input_path</code> and the <code>page_index</code> fields removed.</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md

@@ -1498,7 +1498,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.en.md

@@ -1253,7 +1253,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation generated by the <code>predict</code> method of the production object, where the <code>input_path</code> field is removed.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation generated by the <code>predict</code> method of the production object, where the <code>input_path</code> and the <code>page_index</code> fields are removed.</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md

@@ -1271,7 +1271,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.en.md

@@ -1330,7 +1330,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the <code>predict</code> method of the production object, with the <code>input_path</code> field removed.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the <code>predict</code> method of the production object, with the <code>input_path</code> and the <code>page_index</code> fields removed.</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md

@@ -1275,7 +1275,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.en.md

@@ -1413,7 +1413,7 @@ To remove the page limit, please add the following configuration to the pipeline
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the pipeline object's <code>predict</code> method, excluding the <code>input_path</code> field.</td>
+<td>A simplified version of the <code>res</code> field in the JSON representation of the result generated by the pipeline object's <code>predict</code> method, excluding the <code>input_path</code> and the <code>page_index</code> fields.</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.md

@@ -1419,7 +1419,7 @@ for res in output:
 <tr>
 <td><code>prunedResult</code></td>
 <td><code>object</code></td>
-<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 字段</td>
+<td>产线对象的 <code>predict</code> 方法生成结果的 JSON 表示中 <code>res</code> 字段的简化版本,其中去除了 <code>input_path</code> 和 <code>page_index</code> 字段</td>
 </tr>
 <tr>
 <td><code>outputImages</code></td>

+ 2 - 2
libs/ultra-infer/CMakeLists.txt

@@ -296,7 +296,7 @@ if(ENABLE_POROS_BACKEND)
   list(APPEND DEPEND_LIBS external_poros)
   set(PYTHON_MINIMUM_VERSION 3.6)
   set(PYTORCH_MINIMUM_VERSION 1.9)
-  set(TENSORRT_MINIMUM_VERSION 8.0)
+  set(TENSORRT_MINIMUM_VERSION 8.6)
   # find python3
   find_package(Python3 ${PYTHON_MINIMUM_VERSION} REQUIRED COMPONENTS Interpreter Development)
   message(STATUS "Found Python: ${Python3_VERSION_MAJOR}.${Python3_VERSION_MINOR}.${Python3_VERSION_PATCH}")
@@ -312,7 +312,7 @@ if(ENABLE_POROS_BACKEND)
   message(FATAL_ERROR "While -DENABLE_POROS_BACKEND=ON, must set -DWITH_GPU=ON, but now it's OFF")
   endif()
   if(NOT TRT_DIRECTORY)
-    message(FATAL_ERROR "While -DENABLE_POROS_BACKEND=ON, must define -DTRT_DIRECTORY, e.g -DTRT_DIRECTORY=/Downloads/TensorRT-8.4")
+    message(FATAL_ERROR "While -DENABLE_POROS_BACKEND=ON, must define -DTRT_DIRECTORY, e.g -DTRT_DIRECTORY=/Downloads/TensorRT-8.6")
   endif()
   include_directories(${TRT_DIRECTORY}/include)
   find_library(TRT_INFER_LIB nvinfer ${TRT_DIRECTORY}/lib)

+ 2 - 7
libs/ultra-infer/cmake/openvino.cmake

@@ -27,7 +27,7 @@ if (OPENVINO_DIRECTORY)
 else()
   set(OPENVINO_PROJECT "extern_openvino")
 
-  set(OPENVINO_VERSION "2022.2.0.dev20220829")
+  set(OPENVINO_VERSION "2025.0")
   set(OPENVINO_URL_PREFIX "https://bj.bcebos.com/fastdeploy/third_libs/")
 
   set(COMPRESSED_SUFFIX ".tgz")
@@ -47,12 +47,7 @@ else()
     if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "aarch64")
       message("Cannot compile with openvino while in linux-aarch64 platform")
     else()
-      set(OPENVINO_VERSION "dev.2023.03.2")
-      if(NEED_ABI0)
-        set(OPENVINO_FILENAME "openvino-linux-x64-20230302-abi0")
-      else()
-        set(OPENVINO_VERSION "2025.0")
-        set(OPENVINO_FILENAME "openvino-linux-x64-${OPENVINO_VERSION}")
+      set(OPENVINO_FILENAME "openvino-linux-x64-${OPENVINO_VERSION}")
       endif()
     endif()
   endif()

+ 1 - 1
paddlex/inference/models/common/static_infer.py

@@ -30,9 +30,9 @@ from ...utils.hpi import (
     ONNXRuntimeConfig,
     OpenVINOConfig,
     TensorRTConfig,
-    get_model_paths,
     suggest_inference_backend_and_config,
 )
+from ...utils.model_paths import get_model_paths
 from ...utils.pp_option import PaddlePredictorOption
 from ...utils.trt_config import DISABLE_TRT_HALF_OPS_CONFIG
 

+ 1 - 1
paddlex/inference/serving/basic_serving/_pipeline_apps/_common/common.py

@@ -27,7 +27,7 @@ if is_dep_available("opencv-contrib-python"):
 
 
 def prune_result(result: dict) -> dict:
-    KEYS_TO_REMOVE = ["input_path"]
+    KEYS_TO_REMOVE = ["input_path", "page_index"]
 
     def _process_obj(obj):
         if isinstance(obj, dict):
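For context, here is a minimal sketch of the pruning behaviour behind the `prunedResult` doc updates above, assuming `_process_obj` simply walks nested dicts and lists; the real helper in `common.py` may differ in detail.

```python
# Sketch of recursive key pruning; KEYS_TO_REMOVE matches the updated list.
def prune_result(result: dict) -> dict:
    KEYS_TO_REMOVE = ["input_path", "page_index"]

    def _process_obj(obj):
        if isinstance(obj, dict):
            return {
                k: _process_obj(v) for k, v in obj.items() if k not in KEYS_TO_REMOVE
            }
        if isinstance(obj, list):
            return [_process_obj(item) for item in obj]
        return obj

    return _process_obj(result)

pruned = prune_result(
    {"input_path": "a.jpg", "page_index": 0, "tables": [{"input_path": "b.jpg"}]}
)
# -> {"tables": [{}]}
```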

+ 14 - 49
paddlex/inference/utils/hpi.py

@@ -18,15 +18,15 @@ import importlib.util
 import json
 import platform
 from functools import lru_cache
-from os import PathLike
-from pathlib import Path
-from typing import Any, Dict, List, Literal, Optional, Tuple, TypedDict, Union
+from typing import Any, Dict, List, Literal, Optional, Tuple, Union
 
 from pydantic import BaseModel, Field
 from typing_extensions import Annotated, TypeAlias
 
 from ...utils.deps import function_requires_deps, is_paddle2onnx_plugin_available
-from ...utils.flags import USE_PIR_TRT, FLAGS_json_format_model
+from ...utils.env import get_cuda_version, get_cudnn_version, get_paddle_version
+from ...utils.flags import USE_PIR_TRT
+from .model_paths import ModelPaths
 
 
 class PaddleInferenceInfo(BaseModel):
@@ -93,38 +93,6 @@ class ModelInfo(BaseModel):
 ModelFormat: TypeAlias = Literal["paddle", "onnx", "om"]
 
 
-class ModelPaths(TypedDict, total=False):
-    paddle: Tuple[Path, Path]
-    onnx: Path
-    om: Path
-
-
-def get_model_paths(
-    model_dir: Union[str, PathLike], model_file_prefix: str
-) -> ModelPaths:
-    model_dir = Path(model_dir)
-    model_paths: ModelPaths = {}
-    pd_model_path = None
-    if FLAGS_json_format_model:
-        if (model_dir / f"{model_file_prefix}.json").exists():
-            pd_model_path = model_dir / f"{model_file_prefix}.json"
-    else:
-        if (model_dir / f"{model_file_prefix}.json").exists():
-            pd_model_path = model_dir / f"{model_file_prefix}.json"
-        elif (model_dir / f"{model_file_prefix}.pdmodel").exists():
-            pd_model_path = model_dir / f"{model_file_prefix}.pdmodel"
-    if pd_model_path and (model_dir / f"{model_file_prefix}.pdiparams").exists():
-        model_paths["paddle"] = (
-            pd_model_path,
-            model_dir / f"{model_file_prefix}.pdiparams",
-        )
-    if (model_dir / f"{model_file_prefix}.onnx").exists():
-        model_paths["onnx"] = model_dir / f"{model_file_prefix}.onnx"
-    if (model_dir / f"{model_file_prefix}.om").exists():
-        model_paths["om"] = model_dir / f"{model_file_prefix}.om"
-    return model_paths
-
-
 @lru_cache(1)
 def _get_hpi_model_info_collection():
     with importlib.resources.open_text(
@@ -143,7 +111,6 @@ def suggest_inference_backend_and_config(
     # additional important factors, such as NVIDIA GPU compute capability and
     # device manufacturers. We should also allow users to provide hints.
 
-    import paddle
     from ultra_infer import (
         is_built_with_om,
         is_built_with_openvino,
@@ -174,9 +141,12 @@ def suggest_inference_backend_and_config(
     if hpi_config.backend is not None and hpi_config.backend not in available_backends:
         return None, f"Inference backend {repr(hpi_config.backend)} is unavailable."
 
-    paddle_version = paddle.__version__
-    if paddle_version != "3.0.0":
-        return None, f"{repr(paddle_version)} is not a supported Paddle version."
+    paddle_version = get_paddle_version()
+    if paddle_version != (3, 0, 0):
+        return (
+            None,
+            f"{repr('.'.join(paddle_version))} is not a supported Paddle version.",
+        )
 
     if hpi_config.device_type == "cpu":
         uname = platform.uname()
@@ -186,15 +156,10 @@ def suggest_inference_backend_and_config(
         else:
             return None, f"{repr(arch)} is not a supported architecture."
     elif hpi_config.device_type == "gpu":
-        # FIXME: We should not rely on the PaddlePaddle library to detemine CUDA
-        # and cuDNN versions.
-        # Should we inject environment info from the outside?
-        import paddle.version
-
-        cuda_version = paddle.version.cuda()
-        cuda_version = cuda_version.replace(".", "")
-        cudnn_version = paddle.version.cudnn().rsplit(".", 1)[0]
-        cudnn_version = cudnn_version.replace(".", "")
+        cuda_version = get_cuda_version()
+        cuda_version = "".join(map(str, cuda_version))
+        cudnn_version = get_cudnn_version()
+        cudnn_version = "".join(map(str, cudnn_version[:-1]))
         key = f"gpu_cuda{cuda_version}_cudnn{cudnn_version}"
     else:
         return None, f"{repr(hpi_config.device_type)} is not a supported device type."
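Illustrative only: how the lookup key is assembled from the version tuples returned by the new helpers. The concrete versions below are hypothetical.

```python
# Example key construction, mirroring the code above.
cuda_version = (11, 8)      # e.g. from get_cuda_version()
cudnn_version = (8, 9, 7)   # e.g. from get_cudnn_version()

key = "gpu_cuda{}_cudnn{}".format(
    "".join(map(str, cuda_version)),        # "118"
    "".join(map(str, cudnn_version[:-1])),  # "89" (patch component dropped)
)
assert key == "gpu_cuda118_cudnn89"
```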

+ 51 - 0
paddlex/inference/utils/model_paths.py

@@ -0,0 +1,51 @@
+# Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import PathLike
+from pathlib import Path
+from typing import Tuple, TypedDict, Union
+
+from ...utils.flags import FLAGS_json_format_model
+
+
+class ModelPaths(TypedDict, total=False):
+    paddle: Tuple[Path, Path]
+    onnx: Path
+    om: Path
+
+
+def get_model_paths(
+    model_dir: Union[str, PathLike], model_file_prefix: str
+) -> ModelPaths:
+    model_dir = Path(model_dir)
+    model_paths: ModelPaths = {}
+    pd_model_path = None
+    if FLAGS_json_format_model:
+        if (model_dir / f"{model_file_prefix}.json").exists():
+            pd_model_path = model_dir / f"{model_file_prefix}.json"
+    else:
+        if (model_dir / f"{model_file_prefix}.json").exists():
+            pd_model_path = model_dir / f"{model_file_prefix}.json"
+        elif (model_dir / f"{model_file_prefix}.pdmodel").exists():
+            pd_model_path = model_dir / f"{model_file_prefix}.pdmodel"
+    if pd_model_path and (model_dir / f"{model_file_prefix}.pdiparams").exists():
+        model_paths["paddle"] = (
+            pd_model_path,
+            model_dir / f"{model_file_prefix}.pdiparams",
+        )
+    if (model_dir / f"{model_file_prefix}.onnx").exists():
+        model_paths["onnx"] = model_dir / f"{model_file_prefix}.onnx"
+    if (model_dir / f"{model_file_prefix}.om").exists():
+        model_paths["om"] = model_dir / f"{model_file_prefix}.om"
+    return model_paths

+ 3 - 0
paddlex/paddlex_cli.py

@@ -30,6 +30,7 @@ from .utils.deps import (
     get_serving_dep_specs,
     require_paddle2onnx_plugin,
 )
+from .utils.env import get_cuda_version
 from .utils.flags import FLAGS_json_format_model
 from .utils.install import install_packages
 from .utils.interactive_get_pipeline import interactive_get_pipeline
@@ -241,6 +242,8 @@ def install(args):
         if device_type == "cpu":
             package = "ultra-infer-python"
         elif device_type == "gpu":
+            if get_cuda_version()[0] != 11:
+                sys.exit("Currently, the CUDA version must be 11.x for GPU devices.")
             package = "ultra-infer-gpu-python"
         elif device_type == "npu":
             package = "ultra-infer-npu-python"

+ 2 - 2
paddlex/repo_manager/repo.py

@@ -379,7 +379,7 @@ class RepositoryGroupInstaller(object):
             if req.name in repo_pkgs:
                 # Skip repo packages
                 continue
-            elif req.name in (
+            elif req.name.replace("_", "-") in (
                 "opencv-python",
                 "opencv-contrib-python",
                 "opencv-python-headless",
@@ -392,7 +392,7 @@ class RepositoryGroupInstaller(object):
                 # HACK
                 line_s = "albumentations @ https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/patched_packages/albumentations-1.4.10%2Bpdx-py3-none-any.whl"
                 line_s += "\nalbucore @ https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/patched_packages/albucore-0.0.13%2Bpdx-py3-none-any.whl"
-            elif req.name in ("nuscenes-devkit", "nuscenes_devkit"):
+            elif req.name.replace("_", "-") == "nuscenes-devkit":
                 # HACK
                 line_s = "nuscenes-devkit @ https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/patched_packages/nuscenes_devkit-1.1.11%2Bpdx-py3-none-any.whl"
             elif req.name == "imgaug":
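The `replace("_", "-")` calls implement a simplified form of PEP 503 name normalization, under which pip treats `nuscenes_devkit` and `nuscenes-devkit` as the same project. A sketch of the fuller normalization rule, which also folds case and dots, for comparison:

```python
# PEP 503 project-name normalization (the diff above uses the simpler replace).
import re

def normalize(name: str) -> str:
    return re.sub(r"[-_.]+", "-", name).lower()

assert normalize("nuscenes_devkit") == "nuscenes-devkit"
assert normalize("Opencv_Python") == "opencv-python"
```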

+ 1 - 1
paddlex/utils/deps.py

@@ -222,4 +222,4 @@ def require_paddle2onnx_plugin():
 
 
 def get_paddle2onnx_spec():
-    return "paddle2onnx == 2.0.0a2"
+    return "paddle2onnx == 2.0.0a5"

+ 17 - 31
paddlex/utils/env.py

@@ -12,9 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-import os
-from contextlib import ContextDecorator
-
 
 def get_device_type():
     import paddle
@@ -29,34 +26,23 @@ def get_paddle_version():
     version = paddle.__version__.split(".")
     # ref: https://github.com/PaddlePaddle/Paddle/blob/release/3.0-beta2/setup.py#L316
     assert len(version) == 3
-    major_v, minor_v, patch_v = version
+    major_v, minor_v, patch_v = map(int, version)
     return major_v, minor_v, patch_v
 
 
-class TemporaryEnvVarChanger(ContextDecorator):
-    """
-    A context manager to temporarily change an environment variable's value
-    and restore it upon exiting the context.
-    """
-
-    def __init__(self, var_name, new_value):
-        # if new_value is None, nothing changed
-        self.var_name = var_name
-        self.new_value = new_value
-        self.original_value = None
-
-    def __enter__(self):
-        if self.new_value is None:
-            return self
-        self.original_value = os.environ.get(self.var_name, None)
-        os.environ[self.var_name] = self.new_value
-        return self
-
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        if self.new_value is None:
-            return False
-        if self.original_value is not None:
-            os.environ[self.var_name] = self.original_value
-        elif self.var_name in os.environ:
-            del os.environ[self.var_name]
-        return False
+def get_cuda_version():
+    # FIXME: We should not rely on the PaddlePaddle library to determine CUDA
+    # versions.
+    import paddle.version
+
+    cuda_version = paddle.version.cuda()
+    return tuple(map(int, cuda_version.split(".")))
+
+
+def get_cudnn_version():
+    # FIXME: We should not rely on the PaddlePaddle library to determine cuDNN
+    # versions.
+    import paddle.version
+
+    cudnn_version = paddle.version.cudnn()
+    return tuple(map(int, cudnn_version.split(".")))
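
A hedged sketch of the shapes these helpers return; the values shown are examples and depend on the local PaddlePaddle build (a CPU-only build carries no CUDA/cuDNN information).

```python
# Example return shapes of the new env helpers.
from paddlex.utils.env import get_cuda_version, get_cudnn_version, get_paddle_version

print(get_paddle_version())  # e.g. (3, 0, 0)
print(get_cuda_version())    # e.g. (11, 8)
print(get_cudnn_version())   # e.g. (8, 9, 7)

# The same guard applied in paddlex_cli.py above when installing the GPU plugin:
if get_cuda_version()[0] != 11:
    raise SystemExit("Currently, the CUDA version must be 11.x for GPU devices.")
```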