
update docs (#3365)

* update

* fix format to adapt html
zhang-prog, 9 months ago
commit cddf9b04b2

+ 16 - 16
README.md

@@ -165,7 +165,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.html">时序预测</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/105706/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -175,7 +175,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.html">时序异常检测</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/105708/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -185,7 +185,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.html">时序分类</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/105707/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -235,7 +235,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.html">行人属性识别</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/387978/webUI?source=appCenter">链接</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -245,7 +245,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.html">车辆属性识别</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/387979/webUI?source=appCenter">链接</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -264,7 +264,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.html">人体关键点检测</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -274,7 +274,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_detection.html">开放词汇检测</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -284,7 +284,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_segmentation.html">开放词汇分割</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -294,7 +294,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.html">旋转目标检测</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -304,7 +304,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.html">3D多模态融合检测</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -314,7 +314,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.html">通用表格识别v2</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -334,7 +334,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/ocr_pipelines/layout_parsing_v2.html">通用版面解析v2</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -344,7 +344,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.html">文档图像预处理</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -374,7 +374,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/speech_pipelines/multilingual_speech_recognition.html">多语种语音识别</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -384,7 +384,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/video_pipelines/video_classification.html">通用视频分类</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -394,7 +394,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/tutorials/video_pipelines/video_detection.html">通用视频检测</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>

+ 16 - 16
README_en.md

@@ -163,7 +163,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.html">Time Series Forecasting</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/105706/webUI?source=appMineRecent">Link</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -173,7 +173,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.html">Time Series Anomaly Detection</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/105708/webUI?source=appMineRecent">Link</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -183,7 +183,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.html">Time Series Classification</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/105707/webUI?source=appMineRecent">Link</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -213,7 +213,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute.html">Pedestrian Attribute Recognition</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/387978/webUI?source=appCenter">Link</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -223,7 +223,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute.html">Vehicle Attribute Recognition</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/387979/webUI?source=appCenter">Link</a></td>
         <td>✅</td>
-        <td>🚧</td>
+        <td></td>
         <td>✅</td>
         <td>🚧</td>
         <td>✅</td>
@@ -262,7 +262,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.html">Human Keypoint Detection</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -272,7 +272,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_detection.html">Open Vocabulary Detection</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -282,7 +282,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_segmentation.html">Open Vocabulary Segmentation</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -292,7 +292,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.html">Rotated Object Detection</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -302,7 +302,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.html">3D Bev Detection</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -312,7 +312,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.html">Table Recognition v2</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -332,7 +332,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/ocr_pipelines/layout_parsing_v2.html">Layout Parsing v2</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -342,7 +342,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.html">Document Image Preprocessing</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -372,7 +372,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/speech_pipelines/multilingual_speech_recognition.html">Multilingual Speech Recognition</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -382,7 +382,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/video_pipelines/video_classification.html">Video Classification</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>
@@ -392,7 +392,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
     <tr>
         <td><a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/video_pipelines/video_detection.html">Video Detection</a></td>
         <td>🚧</td>
-        <td></td>
+        <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
         <td>🚧</td>

+ 47 - 11
docs/pipeline_deploy/high_performance_inference.md

@@ -300,17 +300,53 @@ python -m pip install ../../python/dist/ultra_infer*.whl
 ```
 
 编译时可根据需求修改如下选项:
-| 选项 | 说明 |
-|:------------------------|:------------------------------------|
-| http_proxy             | 在下载三方库时使用具体的http代理,默认空 |
-| PYTHON_VERSION | Python版本,默认 `3.10.0` |
-| WITH_GPU | 是否编译支持Nvidia-GPU,默认 `ON` |
-| ENABLE_ORT_BACKEND      | 是否编译集成ONNX Runtime后端,默认 `ON` |
-| ENABLE_PADDLE_BACKEND   | 是否编译集成Paddle Inference后端,默认 `ON` |
-| ENABLE_TRT_BACKEND   | 是否编译集成TensorRT后端,默认 `ON` |
-| ENABLE_OPENVINO_BACKEND | 是否编译集成OpenVINO后端(仅支持CPU),默认 `ON` |
-| ENABLE_VISION           | 是否编译集成视觉模型的部署模块,默认 `ON` |
-| ENABLE_TEXT             | 是否编译集成文本NLP模型的部署模块,默认 `ON` |
+
+<table>
+    <thead>
+        <tr>
+            <th>选项</th>
+            <th>说明</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td>http_proxy</td>
+            <td>在下载三方库时使用具体的http代理,默认空</td>
+        </tr>
+        <tr>
+            <td>PYTHON_VERSION</td>
+            <td>Python版本,默认 <code>3.10.0</code></td>
+        </tr>
+        <tr>
+            <td>WITH_GPU</td>
+            <td>是否编译支持Nvidia-GPU,默认 <code>ON</code></td>
+        </tr>
+        <tr>
+            <td>ENABLE_ORT_BACKEND</td>
+            <td>是否编译集成ONNX Runtime后端,默认 <code>ON</code></td>
+        </tr>
+        <tr>
+            <td>ENABLE_PADDLE_BACKEND</td>
+            <td>是否编译集成Paddle Inference后端,默认 <code>ON</code></td>
+        </tr>
+        <tr>
+            <td>ENABLE_TRT_BACKEND</td>
+            <td>是否编译集成TensorRT后端,默认 <code>ON</code></td>
+        </tr>
+        <tr>
+            <td>ENABLE_OPENVINO_BACKEND</td>
+            <td>是否编译集成OpenVINO后端(仅支持CPU),默认 <code>ON</code></td>
+        </tr>
+        <tr>
+            <td>ENABLE_VISION</td>
+            <td>是否编译集成视觉模型的部署模块,默认 <code>ON</code></td>
+        </tr>
+        <tr>
+            <td>ENABLE_TEXT</td>
+            <td>是否编译集成文本NLP模型的部署模块,默认 <code>ON</code></td>
+        </tr>
+    </tbody>
+</table>
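The options in the new table are passed to CMake as cache variables when configuring the `ultra_infer` build. A minimal sketch of such a configure step (flag names come from the table above; the chosen values and the build directory layout are illustrative assumptions, not a recommended configuration):

```shell
# Sketch only: configure the build with the options documented above.
# Values are for illustration; adjust to your environment.
cmake .. \
  -DPYTHON_VERSION=3.10.0 \
  -DWITH_GPU=ON \
  -DENABLE_ORT_BACKEND=ON \
  -DENABLE_PADDLE_BACKEND=ON \
  -DENABLE_TRT_BACKEND=OFF \
  -DENABLE_OPENVINO_BACKEND=ON
```

Any option omitted from the command line keeps the default listed in the table (e.g. `ENABLE_VISION` and `ENABLE_TEXT` stay `ON`).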
 
 ## 3. 支持使用高性能推理插件的产线与模型
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.en.md

@@ -855,7 +855,7 @@ Below are the API references for basic service-based deployment and multi-langua
 <tr>
 <td><code>outputImages</code></td>
 <td><code>object</code> | <code>null</code></td>
-<td>A key-value pair of input images and predicted result images. The images are in JPEG format and are Base64-encoded.</td>
+<td>See the description of the <code>img</code> attribute in the result of the pipeline prediction. The images are in JPEG format and are Base64-encoded.</td>
 </tr>
 <tr>
 <td><code>inputImage</code> | <code>null</code></td>
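The reworded `outputImages` field can be made concrete with a small client-side sketch. The response object below is hypothetical (only the `outputImages` shape follows the table above: a mapping from image name to a Base64-encoded JPEG):

```python
import base64

# Hypothetical response fragment: `outputImages` maps image names to
# Base64-encoded JPEG data, as described in the API reference above.
response_result = {
    "outputImages": {
        "preprocessed_img": base64.b64encode(
            b"\xff\xd8\xff\xe0...jpeg bytes..."  # placeholder JPEG payload
        ).decode("ascii"),
    },
}

# Decode each entry back to raw JPEG bytes before writing it to disk.
# `outputImages` may be null, hence the `or {}` guard.
for name, b64_data in (response_result["outputImages"] or {}).items():
    jpeg_bytes = base64.b64decode(b64_data)
    with open(f"{name}.jpg", "wb") as f:
        f.write(jpeg_bytes)
```

The same decoding applies to the identically described `outputImages` field in the layout parsing, seal recognition, and table recognition references below.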

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.en.md

@@ -1279,7 +1279,7 @@ Below are the API reference and multi-language service invocation examples for t
 <tr>
 <td><code>outputImages</code></td>
 <td><code>object</code> | <code>null</code></td>
-<td>A key-value pair of the input image and the prediction result image. The images are in JPEG format and encoded in Base64.</td>
+<td>See the description of the <code>img</code> attribute in the result of the pipeline prediction. The images are in JPEG format and encoded in Base64.</td>
 </tr>
 <tr>
 <td><code>inputImage</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing_v2.en.md

@@ -1610,7 +1610,7 @@ Below is the API reference for basic service-oriented deployment and examples of
 <tr>
 <td><code>outputImages</code></td>
 <td><code>object</code> | <code>null</code></td>
-<td>A key-value pair of input images and prediction result images. The images are in JPEG format and are Base64-encoded.</td>
+<td>See the description of the <code>img</code> attribute in the result of the pipeline prediction. The images are in JPEG format and are Base64-encoded.</td>
 </tr>
 <tr>
 <td><code>inputImage</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.en.md

@@ -1163,7 +1163,7 @@ Below are the API references for basic service-oriented deployment and multi-lan
 <tr>
 <td><code>outputImages</code></td>
 <td><code>object</code> | <code>null</code></td>
-<td>A key-value pair of the input image and the prediction result image. The images are in JPEG format and encoded in Base64.</td>
+<td>See the description of the <code>img</code> attribute in the result of the pipeline prediction. The images are in JPEG format and encoded in Base64.</td>
 </tr>
 <tr>
 <td><code>inputImage</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.en.md

@@ -605,7 +605,7 @@ Below are the API reference and multi-language service invocation examples for t
 <tr>
 <td><code>outputImages</code></td>
 <td><code>object</code> | <code>null</code></td>
-<td>A key-value pair of the input image and the predicted result image. The images are in JPEG format and encoded in Base64.</td>
+<td>See the description of the <code>img</code> attribute in the result of the pipeline prediction. The images are in JPEG format and encoded in Base64.</td>
 </tr>
 <tr>
 <td><code>inputImage</code></td>

+ 2 - 3
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.en.md

@@ -953,7 +953,7 @@ In the above Python script, the following steps are executed:
         - `boxes`: `(List[Dict])` List of detection boxes for layout seal regions, each element in the list contains the following fields
             - `cls_id`: `(int)` The class ID of the detection box
             - `score`: `(float)` The confidence score of the detection box
-            - `coordinate`: `(List[float])` The coordinates of the four corners of the detection box, in the order of x1, y1, x2, y2, representing the x-coordinate of the top-left corner, the y-coordinate of the top-left corner, the x-coordinate of the bottom-right corner, and the y-coordinate of the bottom-right corner  
+            - `coordinate`: `(List[float])` The coordinates of the four corners of the detection box, in the order of x1, y1, x2, y2, representing the x-coordinate of the top-left corner, the y-coordinate of the top-left corner, the x-coordinate of the bottom-right corner, and the y-coordinate of the bottom-right corner
     - `doc_preprocessor_res`: `(Dict[str, Union[str, Dict[str, bool], int]])` The output result of the document preprocessing sub-pipeline. Exists only when `use_doc_preprocessor=True`
         - `input_path`: `(Union[str, None])` The image path accepted by the image preprocessing sub-pipeline, saved as `None` when the input is `numpy.ndarray`
         - `model_settings`: `(Dict)` Model configuration parameters for the preprocessing sub-pipeline
@@ -1258,7 +1258,7 @@ Below are the API references and multi-language service call examples for basic
 <tr>
 <td><code>outputImages</code></td>
 <td><code>object</code> | <code>null</code></td>
-<td>A key-value pair of input images and prediction result images. Images are in JPEG format and encoded with Base64.</td>
+<td>See the description of the <code>img</code> attribute in the result of the pipeline prediction. Images are in JPEG format and encoded with Base64.</td>
 </tr>
 <tr>
 <td><code>inputImage</code></td>
@@ -1454,4 +1454,3 @@ paddlex --pipeline table_recognition_v2 \
 ```
 
 If you want to use the General Table Recognition pipeline v2 on a wider variety of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).
-