
update ultra-infer path (#4401)

Poki Bai 3 months ago
parent
commit 2b9cd9670a

+ 1 - 1
.pre-commit-config.yaml

@@ -55,7 +55,7 @@ repos:
     -   id: isort
         args:
             - --profile=black
-        exclude: ^libs/ultra-infer/python/ultra_infer/
+        exclude: ^deploy/ultra-infer/python/ultra_infer/
 
 # check license
 -   repo: local
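
To confirm the relocated exclude pattern is picked up, the isort hook can be run on its own; a minimal check, assuming `pre-commit` is installed in the environment:

```bash
# Run only the isort hook across the repo; files under
# deploy/ultra-infer/python/ultra_infer/ should now be skipped.
pre-commit run isort --all-files
```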

+ 2 - 2
docs/pipeline_deploy/high_performance_inference.en.md

@@ -505,7 +505,7 @@ When the `auto_paddle2onnx` option is enabled, an `inference.onnx` file may be a
 
 ### 2.6 Customizing the Model Inference Library
 
-`ultra-infer` is the model inference library that the high-performance inference plugin depends on. It is maintained as a sub-project under the `PaddleX/libs/ultra-infer` directory. PaddleX provides a build script for `ultra-infer`, located at `PaddleX/libs/ultra-infer/scripts/linux/set_up_docker_and_build_py.sh`. The build script, by default, builds the GPU version of `ultra-infer` and integrates three inference backends: OpenVINO, TensorRT, and ONNX Runtime.
+`ultra-infer` is the model inference library that the high-performance inference plugin depends on. It is maintained as a sub-project under the `PaddleX/deploy/ultra-infer` directory. PaddleX provides a build script for `ultra-infer`, located at `PaddleX/deploy/ultra-infer/scripts/linux/set_up_docker_and_build_py.sh`. The build script, by default, builds the GPU version of `ultra-infer` and integrates three inference backends: OpenVINO, TensorRT, and ONNX Runtime.
 
 If you need to customize the build of `ultra-infer`, you can modify the following options in the build script according to your requirements:
 
@@ -548,7 +548,7 @@ Example:
 
 ```bash
 # Build
-cd PaddleX/libs/ultra-infer/scripts/linux
+cd PaddleX/deploy/ultra-infer/scripts/linux
 # export PYTHON_VERSION=...
 # export WITH_GPU=...
 # export ENABLE_ORT_BACKEND=...
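# For reference, a hedged sketch of one full invocation after the path change;
# the exported values and the direct `bash` call are assumptions, not
# prescribed by the docs:
cd PaddleX/deploy/ultra-infer/scripts/linux
export PYTHON_VERSION=3.10   # assumed value; pick your interpreter version
export WITH_GPU=ON
export ENABLE_ORT_BACKEND=ON
bash set_up_docker_and_build_py.sh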

+ 2 - 2
docs/pipeline_deploy/high_performance_inference.md

@@ -503,7 +503,7 @@ SubModules:
 
### 2.6 Customizing the Model Inference Library
 
-`ultra-infer` is the model inference library that the high-performance inference plugin depends on. It is maintained as a sub-project under the `PaddleX/libs/ultra-infer` directory. PaddleX provides a build script for `ultra-infer`, located at `PaddleX/libs/ultra-infer/scripts/linux/set_up_docker_and_build_py.sh`. The build script, by default, builds the GPU version of `ultra-infer` and integrates three inference backends: OpenVINO, TensorRT, and ONNX Runtime.
+`ultra-infer` is the model inference library that the high-performance inference plugin depends on. It is maintained as a sub-project under the `PaddleX/deploy/ultra-infer` directory. PaddleX provides a build script for `ultra-infer`, located at `PaddleX/deploy/ultra-infer/scripts/linux/set_up_docker_and_build_py.sh`. The build script, by default, builds the GPU version of `ultra-infer` and integrates three inference backends: OpenVINO, TensorRT, and ONNX Runtime.
 
 If you need to customize the build of `ultra-infer`, you can modify the following options in the build script according to your requirements:
 
@@ -546,7 +546,7 @@ SubModules:
 
 ```bash
 # Build
-cd PaddleX/libs/ultra-infer/scripts/linux
+cd PaddleX/deploy/ultra-infer/scripts/linux
 # export PYTHON_VERSION=...
 # export WITH_GPU=...
 # export ENABLE_ORT_BACKEND=...
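# A hedged CPU-only variant of the same build; the flag names come from the
# docs above, while this OFF/ON combination is an assumption:
cd PaddleX/deploy/ultra-infer/scripts/linux
export WITH_GPU=OFF
export ENABLE_ORT_BACKEND=ON
bash set_up_docker_and_build_py.sh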

+ 1 - 1
docs/practical_tutorials/high_performance_npu_tutorial.md

@@ -60,7 +60,7 @@ paddlex --install hpi-npu
 * Manual build and installation
 
 ```bash
-cd PaddleX/libs/ultra-infer/python
+cd PaddleX/deploy/ultra-infer/python
 unset http_proxy https_proxy
 # Enable the OM and ONNX backends; disable the Paddle backend; disable GPU
 export ENABLE_OM_BACKEND=ON ENABLE_ORT_BACKEND=ON ENABLE_PADDLE_BACKEND=OFF WITH_GPU=OFF DEVICE_TYPE=NPU
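# The hunk cuts off before the tutorial's actual build command; a hedged
# completion, assuming the usual setuptools flow for this python/ directory
# (the setup.py steps and the wheel path are assumptions, not confirmed here):
python setup.py build
python setup.py bdist_wheel
pip install dist/ultra_infer*.whl   # wheel filename is an assumption
python -c "import ultra_infer"      # sanity-check the import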