
update docs (#2274)

* update docs

* delete check_links.py

* fix link in ts_forecasting module

* Update README.md

* Update table_structure_recognition.md

* Update seal_text_detection.md

* Update seal_recognition_en.md

* Update doc_img_orientation_classification.md

* Update layout_detection.md

* Update table_structure_recognition.md

* Update text_detection.md

* Update text_recognition.md

---------

Co-authored-by: cuicheng01 <45199522+cuicheng01@users.noreply.github.com>
AmberC0209 1 year ago
commit 362d8d0c8a
83 changed files with 389 additions and 158 deletions
  1. README.md (+10 -10)
  2. README_en.md (+7 -7)
  3. docs/module_usage/module_develop_guide.md (+0 -0)
  4. docs/module_usage/tutorials/cv_modules/doc_text_orientation.md (+0 -1)
  5. docs/module_usage/tutorials/cv_modules/face_features.md (+0 -0)
  6. docs/module_usage/tutorials/cv_modules/image_classification_en.md (+1 -1)
  7. docs/module_usage/tutorials/cv_modules/image_correction.md (+0 -0)
  8. docs/module_usage/tutorials/cv_modules/mainbody_detection_en.md (+1 -1)
  9. docs/module_usage/tutorials/cv_modules/ml_classification_en.md (+1 -1)
  10. docs/module_usage/tutorials/cv_modules/object_detection_en.md (+1 -1)
  11. docs/module_usage/tutorials/cv_modules/small_object_detection_en.md (+1 -1)
  12. docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md (+1 -1)
  13. docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md (+1 -1)
  14. docs/module_usage/tutorials/ocr_modules/layout_detection.md (+1 -1)
  15. docs/module_usage/tutorials/ocr_modules/layout_detection_en.md (+1 -1)
  16. docs/module_usage/tutorials/ocr_modules/seal_text_detection.md (+1 -1)
  17. docs/module_usage/tutorials/ocr_modules/seal_text_detection_en.md (+1 -1)
  18. docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md (+1 -1)
  19. docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md (+2 -2)
  20. docs/module_usage/tutorials/ocr_modules/text_detection.md (+1 -1)
  21. docs/module_usage/tutorials/ocr_modules/text_detection_en.md (+1 -1)
  22. docs/module_usage/tutorials/ocr_modules/text_recognition.md (+1 -1)
  23. docs/module_usage/tutorials/ocr_modules/text_recognition_en.md (+1 -1)
  24. docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md (+2 -0)
  25. docs/module_usage/tutorials/time_series_modules/time_series_classification_en.md (+2 -0)
  26. docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md (+1 -1)
  27. docs/module_usage/tutorials/time_series_modules/time_series_forecasting_en.md (+3 -1)
  28. docs/pipeline_deploy/edge_deploy.md (+8 -7)
  29. docs/pipeline_deploy/edge_deploy_en.md (+7 -6)
  30. docs/pipeline_deploy/service_deploy.md (+2 -2)
  31. docs/pipeline_deploy/service_deploy_en.md (+1 -1)
  32. docs/pipeline_usage/pipeline_develop_guide.md (+3 -3)
  33. docs/pipeline_usage/pipeline_develop_guide_en.md (+2 -2)
  34. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md (+1 -1)
  35. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md (+1 -1)
  36. docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md (+1 -1)
  37. docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md (+2 -2)
  38. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md (+1 -1)
  39. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification_en.md (+2 -2)
  40. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md (+1 -1)
  41. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md (+1 -1)
  42. docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md (+1 -1)
  43. docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md (+1 -1)
  44. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md (+1 -1)
  45. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md (+1 -1)
  46. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md (+1 -1)
  47. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection_en.md (+1 -1)
  48. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md (+1 -1)
  49. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md (+2 -2)
  50. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md (+1 -1)
  51. docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md (+2 -2)
  52. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md (+1 -1)
  53. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition_en.md (+1 -1)
  54. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md (+2 -2)
  55. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md (+3 -3)
  56. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition_en.md (+22 -22)
  57. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md (+1 -1)
  58. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md (+2 -2)
  59. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md (+1 -1)
  60. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md (+1 -1)
  61. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md (+1 -1)
  62. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md (+1 -1)
  63. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md (+1 -1)
  64. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md (+1 -1)
  65. docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial.md (+4 -4)
  66. docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial_en.md (+3 -3)
  67. docs/practical_tutorials/image_classification_garbage_tutorial.md (+1 -1)
  68. docs/practical_tutorials/image_classification_garbage_tutorial_en.md (+1 -1)
  69. docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial.md (+1 -1)
  70. docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial_en.md (+1 -1)
  71. docs/practical_tutorials/object_detection_fall_tutorial.md (+1 -1)
  72. docs/practical_tutorials/object_detection_fall_tutorial_en.md (+1 -1)
  73. docs/practical_tutorials/object_detection_fashion_pedia_tutorial.md (+1 -1)
  74. docs/practical_tutorials/object_detection_fashion_pedia_tutorial_en.md (+1 -1)
  75. docs/practical_tutorials/ocr_det_license_tutorial.md (+1 -1)
  76. docs/practical_tutorials/ocr_det_license_tutorial_en.md (+1 -1)
  77. docs/practical_tutorials/ocr_rec_chinese_tutorial.md (+1 -1)
  78. docs/practical_tutorials/ocr_rec_chinese_tutorial_en.md (+1 -1)
  79. docs/practical_tutorials/semantic_segmentation_road_tutorial.md (+1 -1)
  80. docs/practical_tutorials/semantic_segmentation_road_tutorial_en.md (+1 -1)
  81. docs/support_list/models_list_en.md (+3 -3)
  82. docs/support_list/pipelines_list.md (+81 -9)
  83. docs/support_list/pipelines_list_en.md (+158 -6)

+ 10 - 10
README.md

@@ -27,7 +27,7 @@ PaddleX 3.0 是基于飞桨框架构建的低代码开发工具,它集成了
 | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/b302cd7e-e027-4ea6-86d0-8a4dd6d61f39" height="126px" width="180px"> | <img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/multilabel_cls.png" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/099e2b00-0bbe-4b20-9c5a-96b69e473bd2" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/09f683b4-27df-4c24-b8a7-84da20fdd182" height="126px" width="180px"> |
 |                                                              [**通用语义分割**](./docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md)                                                               |                                                            [**图像异常检测**](./docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md)                                                            |                                                         [ **通用OCR**](./docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md)                                                          |                                                          [**通用表格识别**](./docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)                                                          |
 | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/02637f8c-f248-415b-89ab-1276505f198c" height="126px" width="180px"> | <img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/image_anomaly_detection.png" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/1ef48536-48d4-484b-a6fb-0d6631ba2386" height="126px" width="180px"> |  <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/1e798e05-dee7-4b41-9cc4-6708b6014efa" height="126px" width="180px"> |
-|                                                              [**文本图像智能分析**](./docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md)                                                              |                                                            [**时序预测**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md)                                                            |                                                              [**时序异常检测**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md)                                                              |                                                         [**时序分类**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md)                                                         |
+|                                                              [**文本图像智能分析**](./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md)                                                              |                                                            [**时序预测**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md)                                                            |                                                              [**时序异常检测**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md)                                                              |                                                         [**时序分类**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md)                                                         |
 | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/e3d97f4e-ab46-411c-8155-494c61492b0a" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/6e897bf6-35fe-45e6-a040-e9a1a20cfdf2" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/c54c66cc-da4f-4631-877b-43b0fbb192a6" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/0ce925b2-3776-4dde-8ce0-5156d5a2476e" height="126px" width="180px"> |
 
 ## 🌟 特性
@@ -57,7 +57,7 @@ PaddleX 3.0 是基于飞桨框架构建的低代码开发工具,它集成了
  ## 📊 能力支持
 
 
-PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在线体验**,您可以快速体验各个产线的预训练模型效果,如果您对产线的预训练模型效果满意,可以直接对产线进行[高性能推理](./docs/pipeline_deploy/high_performance_deploy.md)/[服务化部署](./docs/pipeline_deploy/service_deploy.md)/[端侧部署](./docs/pipeline_deploy/lite_deploy.md),如果不满意,您也可以使用产线的**二次开发**能力,提升效果。完整的产线开发流程请参考[PaddleX产线使用概览](./docs/pipeline_usage/pipeline_develop_guide.md)或各产线使用[教程](#-文档)。
+PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在线体验**,您可以快速体验各个产线的预训练模型效果,如果您对产线的预训练模型效果满意,可以直接对产线进行[高性能推理](./docs/pipeline_deploy/high_performance_deploy.md)/[服务化部署](./docs/pipeline_deploy/service_deploy.md)/[端侧部署](./docs/pipeline_deploy/edge_deploy.md),如果不满意,您也可以使用产线的**二次开发**能力,提升效果。完整的产线开发流程请参考[PaddleX产线使用概览](./docs/pipeline_usage/pipeline_develop_guide.md)或各产线使用[教程](#-文档)。
 
 
 此外,PaddleX 为开发者提供了基于[云端图形化开发界面](https://aistudio.baidu.com/pipeline/mine)的全流程开发工具, 点击【创建产线】,选择对应的任务场景和模型产线,就可以开启全流程开发。详细请参考[教程《零门槛开发产业级AI模型》](https://aistudio.baidu.com/practical/introduce/546656605663301)
@@ -67,7 +67,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <th>模型产线</th>
         <th>在线体验</th>
         <th>快速推理</th>
-        <th>高性能部署</th>
+        <th>高性能推理</th>
         <th>服务化部署</th>
         <th>端侧部署</th>
         <th>二次开发</th>
@@ -481,12 +481,12 @@ for res in output:
 
 | 产线名称           | 对应参数                           | 详细说明                                                                                                                                                         |
 |--------------------|------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| 文档场景信息抽取v3   | `PP-ChatOCRv3-doc`                 | [文档场景信息抽取v3产线Python脚本使用说明](./docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md#22-本地体验) |
+| 文档场景信息抽取v3   | `PP-ChatOCRv3-doc`                 | [文档场景信息抽取v3产线Python脚本使用说明](./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md#22-本地体验) |
 | 通用图像分类       | `image_classification`             | [通用图像分类产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md#222-python脚本方式集成)                                |
 | 通用目标检测       | `object_detection`                 | [通用目标检测产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md#222-python脚本方式集成)                                    |
 | 通用实例分割       | `instance_segmentation`            | [通用实例分割产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md#222-python脚本方式集成)                               |
 | 通用语义分割       | `semantic_segmentation`            | [通用语义分割产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md#222-python脚本方式集成)                               |
-| 图像多标签分类 | `multi_label_image_classification` | [通用图像多标签分类产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md#22-python脚本方式集成)               |
+| 图像多标签分类 | `multi_label_image_classification` | [图像多标签分类产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md#22-python脚本方式集成)               |
 | 小目标检测         | `small_object_detection`           | [小目标检测产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md#22-python脚本方式集成)                                 |
 | 图像异常检测       | `anomaly_detection`                | [图像异常检测产线Python脚本使用说明](./docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md#22-python脚本方式集成)                              |
 | 通用OCR            | `OCR`                              | [通用OCR产线Python脚本使用说明](./docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md#222-python脚本方式集成)                                                     |
@@ -494,9 +494,9 @@ for res in output:
 | 通用版面解析       | `layout_parsing`                | [通用版面解析产线Python脚本使用说明](./docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md#22-python脚本方式集成)                                   |
 | 公式识别       | `formula_recognition`                | [公式识别产线Python脚本使用说明](./docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md#22-python脚本方式集成)                                   |
 | 印章文本识别       | `seal_recognition`                | [印章文本识别产线Python脚本使用说明](./docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md#22-python脚本方式集成)                                   |
-| 时序预测       | `ts_fc`                            | [通用时序预测产线Python脚本使用说明](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md#222-python脚本方式集成)                    |
-| 时序异常检测   | `ts_ad`                            | [通用时序异常检测产线Python脚本使用说明](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md#222-python脚本方式集成)          |
-| 时序分类       | `ts_cls`                           | [通用时序分类产线Python脚本使用说明](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md#222-python脚本方式集成)                 |
+| 时序预测       | `ts_fc`                            | [时序预测产线Python脚本使用说明](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md#222-python脚本方式集成)                    |
+| 时序异常检测   | `ts_ad`                            | [时序异常检测产线Python脚本使用说明](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md#222-python脚本方式集成)          |
+| 时序分类       | `ts_cls`                           | [时序分类产线Python脚本使用说明](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md#222-python脚本方式集成)                 |
 
 </details>
 
@@ -519,7 +519,7 @@ for res in output:
 * <details open>
     <summary> <b> 📝 文本图像智能分析 </b></summary>
 
-   * [📄 文档场景信息抽取v3产线使用教程](./docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md)
+   * [📄 文档场景信息抽取v3产线使用教程](./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md)
   </details>
 
 * <details open>
@@ -639,7 +639,7 @@ for res in output:
 
   * [🚀 PaddleX 高性能推理指南](./docs/pipeline_deploy/high_performance_inference.md)
   * [🖥️ PaddleX 服务化部署指南](./docs/pipeline_deploy/service_deploy.md)
-  * [📱 PaddleX 端侧部署指南](./docs/pipeline_deploy/lite_deploy.md)
+  * [📱 PaddleX 端侧部署指南](./docs/pipeline_deploy/edge_deploy.md)
 
 </details>
 <details open>
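
The tables in the README.md hunks above map each pipeline name to the string passed as the `pipeline` parameter, and the `@@` hunk headers show the `for res in output:` loop those docs build on. As a minimal sketch of that usage, assuming the `create_pipeline` API referenced by the README examples; the input path below is purely a placeholder:

```python
from paddlex import create_pipeline

# Any value from the table above works here, e.g. "OCR", "seal_recognition" or "ts_fc".
pipeline = create_pipeline(pipeline="OCR")

# Placeholder input; point this at your own image.
output = pipeline.predict("./general_ocr_example.png")
for res in output:
    res.print()                   # print the structured result
    res.save_to_img("./output/")  # save the visualization alongside the result
```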

+ 7 - 7
README_en.md

@@ -27,7 +27,7 @@ PaddleX 3.0 is a low-code development tool for AI models built on the PaddlePadd
 | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/b302cd7e-e027-4ea6-86d0-8a4dd6d61f39" height="126px" width="180px"> | <img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/multilabel_cls.png" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/099e2b00-0bbe-4b20-9c5a-96b69e473bd2" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/09f683b4-27df-4c24-b8a7-84da20fdd182" height="126px" width="180px"> |
 |                                                              [**Semantic Segmentation**](./docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md)                                                               |                                                            [**Image Anomaly Detection**](./docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md)                                                            |                                                          [**OCR**](./docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md)                                                          |                                                          [**Table Recognition**](./docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md)                                                          |
 | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/02637f8c-f248-415b-89ab-1276505f198c" height="126px" width="180px"> | <img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/image_anomaly_detection.png" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/1ef48536-48d4-484b-a6fb-0d6631ba2386" height="126px" width="180px"> |  <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/1e798e05-dee7-4b41-9cc4-6708b6014efa" height="126px" width="180px"> |
-|                                                              [**PP-ChatOCRv3-doc**](./docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md)                                                              |                                                            [**Time Series Forecasting**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md)                                                            |                                                              [**Time Series Anomaly Detection**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md)                                                              |                                                         [**Time Series Classification**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md)                                                         |
+|                                                              [**PP-ChatOCRv3-doc**](./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md)                                                              |                                                            [**Time Series Forecasting**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md)                                                            |                                                              [**Time Series Anomaly Detection**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md)                                                              |                                                         [**Time Series Classification**](./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md)                                                         |
 | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/e3d97f4e-ab46-411c-8155-494c61492b0a" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/6e897bf6-35fe-45e6-a040-e9a1a20cfdf2" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/c54c66cc-da4f-4631-877b-43b0fbb192a6" height="126px" width="180px"> | <img src="https://github.com/PaddlePaddle/PaddleX/assets/142379845/0ce925b2-3776-4dde-8ce0-5156d5a2476e" height="126px" width="180px"> |
 
 ## 🌟 Why PaddleX ?
@@ -56,7 +56,7 @@ PaddleX is dedicated to achieving pipeline-level model training, inference, and
 ## 📊 What can PaddleX do?
 
 
-All pipelines of PaddleX support **online experience** and local **fast inference**. You can quickly experience the effects of each pre-trained pipeline. If you are satisfied with the effects of the pre-trained pipeline, you can directly perform [high-performance inference](./docs/pipeline_deploy/high_performance_inference_en.md) / [serving deployment](./docs/pipeline_deploy/service_deploy_en.md) / [edge deployment](./docs/pipeline_deploy/lite_deploy_en.md) on the pipeline. If not satisfied, you can also **Custom Development** to improve the pipeline effect. For the complete pipeline development process, please refer to the [PaddleX pipeline Development Tool Local Use Tutorial](./docs/pipeline_usage/pipeline_develop_guide_en.md).
+All pipelines of PaddleX support **online experience** and local **fast inference**. You can quickly experience the effects of each pre-trained pipeline. If you are satisfied with the effects of the pre-trained pipeline, you can directly perform [high-performance inference](./docs/pipeline_deploy/high_performance_inference_en.md) / [serving deployment](./docs/pipeline_deploy/service_deploy_en.md) / [edge deployment](./docs/pipeline_deploy/edge_deploy_en.md) on the pipeline. If not satisfied, you can also **Custom Development** to improve the pipeline effect. For the complete pipeline development process, please refer to the [PaddleX pipeline Development Tool Local Use Tutorial](./docs/pipeline_usage/pipeline_develop_guide_en.md).
 
 In addition, PaddleX provides developers with a full-process efficient model training and deployment tool based on a [cloud-based GUI](https://aistudio.baidu.com/pipeline/mine). Developers **do not need code development**, just need to prepare a dataset that meets the pipeline requirements to **quickly start model training**. For details, please refer to the tutorial ["Developing Industrial-level AI Models with Zero Barrier"](https://aistudio.baidu.com/practical/introduce/546656605663301).
 
@@ -479,7 +479,7 @@ For other pipelines in Python scripts, just adjust the `pipeline` parameter of t
 
 | pipeline Name           | Corresponding Parameter               | Detailed Explanation                                                                                                      |
 |-------------------------------|-------------------------------------|---------------------------------------------------------------------------------------------------------------|
-| PP-ChatOCRv3-doc   | `PP-ChatOCRv3-doc` | [PP-ChatOCRv3-doc Pipeline Python Script Usage Instructions](./docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md) |
+| PP-ChatOCRv3-doc   | `PP-ChatOCRv3-doc` | [PP-ChatOCRv3-doc Pipeline Python Script Usage Instructions](./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md) |
 |  Image Classification       | `image_classification` | [ Image Classification Pipeline Python Script Usage Instructions](./docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md) |
 |  Object Detection       | `object_detection` | [ Object Detection Pipeline Python Script Usage Instructions](./docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md) |
 |  Instance Segmentation       | `instance_segmentation` | [ Instance Segmentation Pipeline Python Script Usage Instructions](./docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md) |
@@ -514,7 +514,7 @@ For other pipelines in Python scripts, just adjust the `pipeline` parameter of t
 * <details open>
     <summary> <b> 📝 Information Extracion</b></summary>
 
-   * [📄 PP-ChatOCRv3 Pipeline Tutorial](./docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md)
+   * [📄 PP-ChatOCRv3 Pipeline Tutorial](./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md)
   </details>
 
 * <details open>
@@ -612,9 +612,9 @@ For other pipelines in Python scripts, just adjust the `pipeline` parameter of t
 * <details open>
   <summary> <b> ⏱️ Time Series Analysis </b></summary>
 
-  * [📈 Time Series Forecasting Module Tutorial](./docs/module_usage/tutorials/ts_modules/time_series_forecast_en.md)
+  * [📈 Time Series Forecasting Module Tutorial](./docs/module_usage/tutorials/time_series_modules/time_series_forecasting_en.md)
   * [🚨 Time Series Anomaly Detection Module Tutorial](./docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md)
-  * [🕒 Time Series Classification Module Tutorial](./docs/module_usage/tutorials/ts_modules/time_series_classification_en.md)
+  * [🕒 Time Series Classification Module Tutorial](./docs/module_usage/tutorials/time_series_modules/time_series_classification_en.md)
   </details>
 
 * <details open>
@@ -632,7 +632,7 @@ For other pipelines in Python scripts, just adjust the `pipeline` parameter of t
 
   * [🚀 PaddleX High-Performance Inference Guide](./docs/pipeline_deploy/high_performance_inference_en.md)
   * [🖥️ PaddleX Service Deployment Guide](./docs/pipeline_deploy/service_deploy_en.md)
-  * [📱 PaddleX Edge Deployment Guide](./docs/pipeline_deploy/lite_deploy_en.md)
+  * [📱 PaddleX Edge Deployment Guide](./docs/pipeline_deploy/edge_deploy_en.md)
 
 </details>
 <details open>
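
The English README hunks repeat the same parameter table, so switching pipelines amounts to changing that one string. For instance, a hedged sketch of the time series forecasting case (`ts_fc` in the table), assuming the result object exposes a CSV writer analogous to `save_to_img`; the CSV path is a placeholder:

```python
from paddlex import create_pipeline

# "ts_fc" is the parameter the README table maps to time series forecasting.
pipeline = create_pipeline(pipeline="ts_fc")

# Placeholder CSV; time series pipelines take tabular input instead of images.
output = pipeline.predict("./ts_demo.csv")
for res in output:
    res.print()
    res.save_to_csv("./output/")  # assumed CSV counterpart of save_to_img
```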

+ 0 - 0
docs/module_usage/module_develop_guide.md


+ 0 - 1
docs/module_usage/tutorials/cv_modules/doc_text_orientation.md

@@ -1 +0,0 @@
-

+ 0 - 0
docs/module_usage/tutorials/cv_modules/face_features.md


+ 1 - 1
docs/module_usage/tutorials/cv_modules/image_classification_en.md

@@ -1,6 +1,6 @@
 [简体中文](image_classification.md) | English
 
-# Tutorial on Developing Image Classification Modules
+# Image Classification Module Development Tutorial
 
 ## I. Overview
 The image classification module is a crucial component in computer vision systems, responsible for categorizing input images. The performance of this module directly impacts the accuracy and efficiency of the entire computer vision system. Typically, the image classification module receives an image as input and, through deep learning or other machine learning algorithms, classifies it into predefined categories based on its characteristics and content. For instance, in an animal recognition system, the image classification module might need to classify an input image as "cat," "dog," "horse," etc. The classification results from the image classification module are then output for use by other modules or systems.

+ 0 - 0
docs/module_usage/tutorials/cv_modules/image_correction.md


+ 1 - 1
docs/module_usage/tutorials/cv_modules/mainbody_detection_en.md

@@ -1,6 +1,6 @@
 [简体中文](mainbody_detection.md) | English
 
-# Tutorial for Developing Mainbody detection Modules
+# Mainbody detection Module Development Tutorial
 
 ## I. Overview
 Mainbody detection is a fundamental task in object detection, aiming to identify and extract the location and size of specific target objects, people, or entities from images and videos. By constructing deep neural network models, mainbody detection learns the feature representations of image subjects to achieve efficient and accurate detection.

+ 1 - 1
docs/module_usage/tutorials/cv_modules/ml_classification_en.md

@@ -1,6 +1,6 @@
 [简体中文](ml_classification.md) | English
 
-# Tutorial on Developing Image Multi-Label Classification Modules
+# Image Multi-Label Classification Module Development Tutorial
 
 ## I. Overview
 The image multi-label classification module is a crucial component in computer vision systems, responsible for assigning multiple labels to input images. Unlike traditional image classification tasks that assign a single category to an image, multi-label classification tasks require assigning multiple relevant categories to an image. The performance of this module directly impacts the accuracy and efficiency of the entire computer vision system. The image multi-label classification module typically takes an image as input and, through deep learning or other machine learning algorithms, classifies it into multiple predefined categories based on its characteristics and content. For example, an image containing both a cat and a dog might be labeled as both "cat" and "dog" by the image multi-label classification module. These classification labels are then output for subsequent processing and analysis by other modules or systems.

+ 1 - 1
docs/module_usage/tutorials/cv_modules/object_detection_en.md

@@ -1,6 +1,6 @@
 [简体中文](object_detection.md) | English
 
-# Tutorial on Developing Object Detection Modules
+# Object Detection Module Development Tutorial
 
 ## I. Overview
 The object detection module is a crucial component in computer vision systems, responsible for locating and marking regions containing specific objects in images or videos. The performance of this module directly impacts the accuracy and efficiency of the entire computer vision system. The object detection module typically outputs bounding boxes for the target regions, which are then passed as input to the object recognition module for further processing.

+ 1 - 1
docs/module_usage/tutorials/cv_modules/small_object_detection_en.md

@@ -1,6 +1,6 @@
 [简体中文](small_object_detection.md) | English
 
-# Tutorial for Developing Small Object Detection Modules
+# Small Object Detection Module Development Tutorial
 
 ## I. Overview
 Small object detection typically refers to accurately detecting and locating small-sized target objects in images or videos. These objects often have a small pixel size in images, typically less than 32x32 pixels (as defined by datasets like MS COCO), and may be obscured by the background or other objects, making them difficult to observe directly by the human eye. Small object detection is an important research direction in computer vision, aiming to precisely detect small objects with minimal visual features in images.

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md

@@ -251,7 +251,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 
 1.**产线集成**
 
-文档图像分类模块可以集成的PaddleX产线有[文档场景信息抽取产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成文本检测模块的模型更新。
+文档图像分类模块可以集成的PaddleX产线有[文档场景信息抽取v3产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成文本检测模块的模型更新。
 
 2.**模块集成**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md

@@ -270,7 +270,7 @@ The model can be directly integrated into the PaddleX pipeline or into your own
 
 1.**Pipeline Integration**
 
-The document image classification module can be integrated into PaddleX pipelines such as the [Document Scene Information Extraction Pipeline (PP-ChatOCRv3)](../../..//pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the The document image classification module's model.
+The document image classification module can be integrated into PaddleX pipelines such as the [Document Scene Information Extraction Pipeline (PP-ChatOCRv3)](../../..//pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the The document image classification module's model.
 
 2.**Module Integration**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/layout_detection.md

@@ -248,7 +248,7 @@ python main.py -c paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
 模型可以直接集成到PaddleX产线中,也可以直接集成到您自己的项目中。
 
 1. **产线集成**
-版面区域检测模块可以集成的PaddleX产线有[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成版面区域检测模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+版面区域检测模块可以集成的PaddleX产线有[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取v3产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成版面区域检测模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 1. **模块集成**
 您产出的权重可以直接集成到版面区域检测模块中,可以参考[快速集成](#三快速集成)的 Python 示例代码,只需要将模型替换为你训练的到的模型路径即可。

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/layout_detection_en.md

@@ -249,7 +249,7 @@ Other related parameters can be set by modifying the fields under `Global` and `
 The model can be directly integrated into PaddleX pipelines or into your own projects.
 
 1. **Pipeline Integration**
-The structure analysis module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../..//pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the layout area localization module. In pipeline integration, you can use high-performance inference and service-oriented deployment to deploy your model.
+The structure analysis module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../..//pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the layout area localization module. In pipeline integration, you can use high-performance inference and service-oriented deployment to deploy your model.
 
 1. **Module Integration**
 The weights you produce can be directly integrated into the layout area localization module. You can refer to the Python example code in the [Quick Integration](#quick) section, simply replacing the model with the path to your trained model.
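
Several of the module tutorials touched in this commit end with the same instruction: take the Quick Integration snippet and swap in the path to your trained weights. As a sketch of what that looks like, assuming the single-model `create_model` API those Quick Integration sections refer to; the model name is taken from the config in the hunk header above, and the image path is a placeholder:

```python
from paddlex import create_model

# Replace the name with the directory of your own trained/exported model to
# update the module, as the tutorials above describe.
model = create_model("PicoDet-L_layout_3cls")

output = model.predict("./layout_example.jpg", batch_size=1)  # placeholder image
for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json")
```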

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/seal_text_detection.md

@@ -254,7 +254,7 @@ python main.py -c paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.y
 
 1.**产线集成**
 
-印章文本检测模块可以集成的PaddleX产线有[文档场景信息抽取产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成印章文本检测模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+印章文本检测模块可以集成的PaddleX产线有[文档场景信息抽取v3产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成印章文本检测模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 2.**模块集成**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/seal_text_detection_en.md

@@ -275,7 +275,7 @@ The model can be directly integrated into the PaddleX pipeline or into your own
 
 1. **Pipeline Integration**
 
-The document Seal Text Detection module can be integrated into PaddleX pipelines such as the [General OCR Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md) and [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the text detection module of the relevant pipeline.
+The document Seal Text Detection module can be integrated into PaddleX pipelines such as the [General OCR Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md) and [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the text detection module of the relevant pipeline.
 
 2. **Module Integration**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md

@@ -267,7 +267,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml  \
 
 1.**产线集成**
 
-表格结构识别模块可以集成的PaddleX产线有[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的表格结构识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+表格结构识别模块可以集成的PaddleX产线有[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取v3产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的表格结构识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 
 2.**模块集成**

+ 2 - 2
docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md

@@ -1,6 +1,6 @@
 [简体中文](table_structure_recognition.md) | English
 
-# Tutorial for Developing Table Structure Recognition Modules
+# Table Structure Recognition Module Development Tutorial
 
 ## I. Overview
 Table structure recognition is a crucial component in table recognition systems, converting non-editable table images into editable table formats (e.g., HTML). The goal of table structure recognition is to identify the rows, columns, and cell positions of tables. The performance of this module directly impacts the accuracy and efficiency of the entire table recognition system. The module typically outputs HTML or LaTeX code for the table area, which is then passed to the table content recognition module for further processing.
@@ -268,7 +268,7 @@ The model can be directly integrated into the PaddleX pipeline or directly into
 
 1.**Pipeline Integration**
 
-The table structure recognition module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the table structure recognition module in the relevant pipelines. For pipeline integration, you can deploy your obtained model using high-performance inference and service-oriented deployment.
+The table structure recognition module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the table structure recognition module in the relevant pipelines. For pipeline integration, you can deploy your obtained model using high-performance inference and service-oriented deployment.
 
 2.**Module Integration**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/text_detection.md

@@ -235,7 +235,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 
 1.**产线集成**
 
-文本检测模块可以集成的 PaddleX 产线有[通用 OCR 产线](../../../pipeline_usage/tutorials/ocr_pipelines/OCR.md)、[表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的文本检测模块的模型更新。
+文本检测模块可以集成的 PaddleX 产线有[通用 OCR 产线](../../../pipeline_usage/tutorials/ocr_pipelines/OCR.md)、[表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取v3产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的文本检测模块的模型更新。
 
 2.**模块集成**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/text_detection_en.md

@@ -199,7 +199,7 @@ Models can be directly integrated into PaddleX pipelines or into your own projec
 
 1.**Pipeline Integration**
 
-The text detection module can be integrated into PaddleX pipelines such as the [General OCR Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md), [Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md), and [PP-ChatOCRv3-doc](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the text detection module of the relevant pipeline.
+The text detection module can be integrated into PaddleX pipelines such as the [General OCR Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md), [Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md), and [PP-ChatOCRv3-doc](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the text detection module of the relevant pipeline.
 
 2.**Module Integration**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/text_recognition.md

@@ -299,7 +299,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 
 1.**产线集成**
 
-文本识别模块可以集成的PaddleX产线有[通用 OCR 产线](../../../pipeline_usage/tutorials/ocr_pipelines/OCR.md)、[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的文本识别模块的模型更新。
+文本识别模块可以集成的PaddleX产线有[通用 OCR 产线](../../../pipeline_usage/tutorials/ocr_pipelines/OCR.md)、[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取v3产线(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的文本识别模块的模型更新。
 
 2.**模块集成**
 

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/text_recognition_en.md

@@ -304,7 +304,7 @@ Models can be directly integrated into the PaddleX pipelines or into your own pr
 
 1.**Pipeline Integration**
 
-The text recognition module can be integrated into PaddleX pipelines such as the [General OCR Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md), [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md), and [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the text recognition module of the relevant pipeline.
+The text recognition module can be integrated into PaddleX pipelines such as the [General OCR Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/OCR_en.md), [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md), and [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the text recognition module of the relevant pipeline.
 
 2.**Module Integration**
 

+ 2 - 0
docs/module_usage/tutorials/ts_modules/time_series_anomaly_detection_en.md → docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md

@@ -1,3 +1,5 @@
+[简体中文](time_series_anomaly_detection.md) | English
+
 # Time Series Anomaly Detection Module Development Tutorial
 
 ## I. Overview

+ 2 - 0
docs/module_usage/tutorials/ts_modules/time_series_classification_en.md → docs/module_usage/tutorials/time_series_modules/time_series_classification_en.md

@@ -1,3 +1,5 @@
+[简体中文](time_series_classification.md) | English
+
 # Time Series Classification Module Development Tutorial
 
 ## I. Overview

+ 1 - 1
docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md

@@ -1,4 +1,4 @@
-简体中文 | [English](time_series_forecast_en.md)
+简体中文 | [English](time_series_forecasting_en.md)
 
 # 时序预测模块使用教程
 

+ 3 - 1
docs/module_usage/tutorials/ts_modules/time_series_forecast_en.md → docs/module_usage/tutorials/time_series_modules/time_series_forecasting_en.md

@@ -1,3 +1,5 @@
+[简体中文](time_series_forecasting.md) | English
+
 # Time Series Forecasting Module Development Tutorial
 
 ## I. Overview
@@ -40,7 +42,7 @@ For more information on using PaddleX's single-model inference API, refer to the
 
 ## IV. Custom Development
 
-If you seek higher accuracy, you can leverage PaddleX's custom development capabilities to develop better Time Series Forecasting models. Before developing a Time Series Forecasting model with PaddleX, ensure you have installed PaddleClas plugin for PaddleX. The installation process can be found in the custom development section of the [PaddleX Local Installation Tutorial](../../installation/installation_en.md).
+If you seek higher accuracy, you can leverage PaddleX's custom development capabilities to develop better Time Series Forecasting models. Before developing a Time Series Forecasting model with PaddleX, ensure you have installed PaddleClas plugin for PaddleX. The installation process can be found in the custom development section of the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md).
 
 ### 4.1 Dataset Preparation
 

+ 8 - 7
docs/pipeline_deploy/lite_deploy.md → docs/pipeline_deploy/edge_deploy.md

@@ -1,13 +1,14 @@
-简体中文 | [English](lite_deploy_en.md)
+简体中文 | [English](edge_deploy_en.md)
 
 # PaddleX 端侧部署 demo 使用指南
 
-- [安装流程与使用方式](#安装流程与使用方式)
-  - [环境准备](#环境准备)
-  - [物料准备](#物料准备)
-  - [部署步骤](#部署步骤)
-- [参考资料](#参考资料)
-- [反馈专区](#反馈专区)
+- [PaddleX 端侧部署 demo 使用指南](#paddlex-端侧部署-demo-使用指南)
+  - [安装流程与使用方式](#安装流程与使用方式)
+    - [环境准备](#环境准备)
+    - [物料准备](#物料准备)
+    - [部署步骤](#部署步骤)
+  - [参考资料](#参考资料)
+  - [反馈专区](#反馈专区)
 
 本指南主要介绍 PaddleX 端侧部署 demo 在 Android shell 上的运行方法。
 本指南适用于下列 6 种模块的 8 个模型:

+ 7 - 6
docs/pipeline_deploy/lite_deploy_en.md → docs/pipeline_deploy/edge_deploy_en.md

@@ -2,12 +2,13 @@
 
 # PaddleX Edge Deployment Demo Usage Guide
 
-- [Installation Process and Usage](#installation-process-and-usage)
-  - [Environment Preparation](#environment-preparation)
-  - [Material Preparation](#material-preparation)
-  - [Deployment Steps](#deployment-steps)
-- [Reference Materials](#reference-materials)
-- [Feedback Section](#feedback-section)
+- [PaddleX Edge Deployment Demo Usage Guide](#paddlex-edge-deployment-demo-usage-guide)
+  - [Installation Process and Usage](#installation-process-and-usage)
+    - [Environment Preparation](#environment-preparation)
+    - [Material Preparation](#material-preparation)
+    - [Deployment Steps](#deployment-steps)
+  - [Reference Materials](#reference-materials)
+  - [Feedback Section](#feedback-section)
 
 This guide mainly introduces the operation method of the PaddleX edge deployment demo on the Android shell.
 This guide applies to 8 models across 6 modules:

+ 2 - 2
docs/pipeline_deploy/service_deploy.md

@@ -69,7 +69,7 @@ INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
 | 通用目标检测产线       | [通用目标检测产线使用教程](../pipeline_usage/tutorials/cv_pipelines/object_detection.md)       |
 | 通用语义分割产线       | [通用语义分割产线使用教程](../pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md)       |
 | 通用实例分割产线       | [通用实例分割产线使用教程](../pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md)       |
-| 通用图像多标签分类产线 | [通用图像多标签分类产线使用教程](../pipeline_usage/tutorials/cv_pipelines/image_multi_label_lassification.md) |
+| 通用图像多标签分类产线 | [通用图像多标签分类产线使用教程](../pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md) |
 | 小目标检测产线         | [小目标检测产线使用教程](../pipeline_usage/tutorials/cv_pipelines/small_object_detection.md)         |
 | 图像异常检测产线       | [图像异常检测产线使用教程](../pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md)       |
 | 通用OCR产线            | [通用OCR产线使用教程](../pipeline_usage/tutorials/ocr_pipelines/OCR.md)            |
@@ -80,7 +80,7 @@ INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
 | 时序预测产线           | [时序预测产线使用教程](../pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md)           |
 | 时序异常检测产线       | [时序异常检测产线使用教程](../pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md)       |
 | 时序分类产线           | [时序分类产线使用教程](../pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md)           |
-| 文档场景信息抽取v3产线 | [文档场景信息抽取v3产线使用教程](../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md) |
+| 文档场景信息抽取v3产线 | [文档场景信息抽取v3产线使用教程](../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md) |
 
 ## 2、将服务用于生产
 

+ 1 - 1
docs/pipeline_deploy/service_deploy_en.md

@@ -81,7 +81,7 @@ Please refer to the **"Development Integration/Deployment"** section in the usag
 | Time Series Forecasting Pipeline | [Tutorial for Using the Time Series Forecasting Pipeline](../pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md) |
 | Time Series Anomaly Detection Pipeline | [Tutorial for Using the Time Series Anomaly Detection Pipeline](../pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md) |
 | Time Series Classification Pipeline | [Tutorial for Using the Time Series Classification Pipeline](../pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md) |
-| Document Scene Information Extraction v3 Pipeline | [Tutorial for Using the Document Scene Information Extraction v3 Pipeline](../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md) |
+| Document Scene Information Extraction v3 Pipeline | [Tutorial for Using the Document Scene Information Extraction v3 Pipeline](../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md) |
 
 ## 2. Deploy Services for Production
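
The `@@` headers in the two service_deploy hunks show the service coming up on `http://0.0.0.0:8080` via Uvicorn, with clients obtaining inference results over HTTP. A hedged client sketch follows; the endpoint path, request fields, and response shape here are assumptions for illustration only, not the documented serving API, so consult the serving guide for the real schema:

```python
import base64
import requests

# Assumed route and payload layout; the actual endpoint and field names are
# defined by the PaddleX serving guide, not by this sketch.
API_URL = "http://localhost:8080/ocr"

with open("./example.png", "rb") as f:  # placeholder input image
    image_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(API_URL, json={"image": image_b64}, timeout=30)
resp.raise_for_status()
print(resp.json())  # inspect the returned result structure
```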
 

+ 3 - 3
docs/pipeline_usage/pipeline_develop_guide.md

@@ -180,11 +180,11 @@ Pipeline:
 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
 
 
-🚀 **高性能推理**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考[PaddleX高性能部署指南](../pipeline_deploy/high_performance_deploy.md)。
+🚀 **高性能推理**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考[PaddleX高性能部署指南](../pipeline_deploy/high_performance_inference.md)。
 
 ☁️ **服务化部署**:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考[PaddleX服务化部署指南](../pipeline_deploy/service_deploy.md)。
 
-📱 **端侧部署**:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考[PaddleX端侧部署指南](../pipeline_deploy/lite_deploy.md)。
+📱 **端侧部署**:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考[PaddleX端侧部署指南](../pipeline_deploy/edge_deploy.md)。
 您可以根据需要选择合适的方式部署模型产线,进而进行后续的 AI 应用集成。
 
 
@@ -193,7 +193,7 @@ Pipeline:
 
 | 产线名称           | 详细说明                                                                                                      |
 |--------------------|----------------------------------------------------------------------------------------------------------------|
-| 文档场景信息抽取v3   | [文档场景信息抽取v3产线使用教程](./tutorials/information_extration_pipelines/document_scene_information_extraction.md) |
+| 文档场景信息抽取v3   | [文档场景信息抽取v3产线使用教程](./tutorials/information_extraction_pipelines/document_scene_information_extraction.md) |
 | 通用图像分类       | [通用图像分类产线使用教程](./tutorials/cv_pipelines/image_classification.md) |
 | 通用目标检测       | [通用目标检测产线使用教程](./tutorials/cv_pipelines/object_detection.md) |
 | 通用实例分割       | [通用实例分割产线使用教程](./tutorials/cv_pipelines/instance_segmentation.md) |

+ 2 - 2
docs/pipeline_usage/pipeline_develop_guide_en.md

@@ -180,7 +180,7 @@ In addition, PaddleX also provides three other deployment methods, with detailed
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. Refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md) for detailed service-oriented deployment procedures.
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. Refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md) for detailed edge deployment procedures.
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. Refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md) for detailed edge deployment procedures.
 
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
@@ -189,7 +189,7 @@ Choose the appropriate deployment method for your model pipeline based on your n
 
 | Pipeline Name          | Detailed Description                                                                                                      |
 |------------------------|---------------------------------------------------------------------------------------------------------------------------|
-| PP-ChatOCR-doc v3   | [PP-ChatOCR-doc v3 Pipeline Usage Tutorial](./tutorials/information_extration_pipelines/document_scene_information_extraction_en.md) |
+| PP-ChatOCR-doc v3   | [PP-ChatOCR-doc v3 Pipeline Usage Tutorial](./tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md) |
 | Image Classification       | [Image Classification Pipeline Usage Tutorial](./tutorials/cv_pipelines/image_classification_en.md) |
 | Object Detection       | [Object Detection Pipeline Usage Tutorial](./tutorials/cv_pipelines/object_detection_en.md) |
 | Instance Segmentation       | [Instance Segmentation Pipeline Usage Tutorial](./tutorials/cv_pipelines/instance_segmentation_en.md) |

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md

@@ -583,7 +583,7 @@ echo "Output image saved at " . $output_image_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md

@@ -556,7 +556,7 @@ echo "Output image saved at " . $output_image_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, enabling devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, enabling devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md

@@ -1205,7 +1205,7 @@ print_r($result["categories"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 2 - 2
docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md

@@ -1,6 +1,6 @@
 [简体中文](image_classification.md) | English
 
-# General Image Classification Pipeline Usage Tutorial
+# General Image Classification Pipeline Tutorial
 
 ## 1. Introduction to the General Image Classification Pipeline
 Image classification is a technique that assigns images to predefined categories. It is widely applied in object recognition, scene understanding, and automatic annotation. Image classification can identify various objects such as animals, plants, traffic signs, and categorize them based on their features. By leveraging deep learning models, image classification can automatically extract image features and perform accurate classification.
@@ -1186,7 +1186,7 @@ print_r($result["categories"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md

@@ -614,7 +614,7 @@ print_r($result["categories"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 2 - 2
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification_en.md

@@ -592,7 +592,7 @@ print_r($result["categories"]);
 
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a way to place computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a way to place computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 ## 4. Custom Development
@@ -630,4 +630,4 @@ At this point, if you wish to switch the hardware to Ascend NPU, simply modify t
 ```bash
 paddlex --pipeline multi_label_image_classification --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg --device npu:0
 ```
-If you want to use the General Image Multi-label Classification Pipeline on more diverse hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../installation/multi_devices_use_guide_en.md).
+If you want to use the General Image Multi-label Classification Pipeline on more diverse hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).
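The CLI switch above has a Python-API counterpart. The following is a minimal sketch, not taken from this commit, that assumes `create_pipeline` accepts the same pipeline name and device strings as the CLI:

```python
from paddlex import create_pipeline

# Assumption: the Python API mirrors the CLI, so only the device string changes
# when moving from GPU ("gpu:0") to Ascend NPU ("npu:0").
pipeline = create_pipeline(pipeline="multi_label_image_classification", device="npu:0")

output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg")
for res in output:
    res.print()  # print the predicted labels and scores
```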

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md

@@ -644,7 +644,7 @@ print_r($result["instances"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md

@@ -630,7 +630,7 @@ print_r($result["instances"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on the user's device itself, allowing the device to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on the user's device itself, allowing the device to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md

@@ -933,7 +933,7 @@ print_r($result["detectedObjects"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md

@@ -913,7 +913,7 @@ print_r($result["detectedObjects"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. Refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md) for detailed edge deployment procedures.
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. Refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md) for detailed edge deployment procedures.
 
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md

@@ -614,7 +614,7 @@ echo "Output image saved at " . $output_image_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md

@@ -592,7 +592,7 @@ echo "Output image saved at " . $output_image_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md

@@ -622,7 +622,7 @@ print_r($result["detectedObjects"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection_en.md

@@ -604,7 +604,7 @@ print_r($result["detectedObjects"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md → docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md

@@ -650,7 +650,7 @@ print(result_chat["chatResult"])
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 2 - 2
docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md → docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md

@@ -656,7 +656,7 @@ print(result_chat["chatResult"])
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 
 ## 4. Custom Development
 
@@ -717,5 +717,5 @@ pipeline = create_pipeline(
     )
 ```
 
-If you want to use the PP-ChatOCRv3-doc Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../installation/multi_devices_use_guide_en.md).
+If you want to use the PP-ChatOCRv3-doc Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md

@@ -735,7 +735,7 @@ print_r($result["texts"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 

+ 2 - 2
docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md

@@ -1,6 +1,6 @@
 [简体中文](OCR.md) | English
 
-# General OCR Pipeline Usage Tutorial
+# General OCR Pipeline Tutorial
 
 ## 1. Introduction to OCR Pipeline
 OCR (Optical Character Recognition) is a technology that converts text in images into editable text. It is widely used in document digitization, information extraction, and data processing. OCR can recognize printed text, handwritten text, and even certain types of fonts and symbols.
@@ -719,7 +719,7 @@ print_r($result["texts"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 You can choose the appropriate deployment method based on your needs to proceed with subsequent AI application integration.
 
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md

@@ -660,7 +660,7 @@ print_r($result["formulas"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition_en.md

@@ -639,7 +639,7 @@ print_r($result["formulas"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 You can choose the appropriate deployment method based on your needs to proceed with subsequent AI application integration.
 
 

+ 2 - 2
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md

@@ -322,7 +322,7 @@ for res in output:
 
 In addition, PaddleX also provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Inference**: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed high-performance inference procedure, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed high-performance inference procedure, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. For the detailed service-oriented deployment procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -443,7 +443,7 @@ for res in result["layoutParsingResults"]:
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 3 - 3
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md

@@ -286,7 +286,7 @@ for res in output:
 
 In addition, PaddleX also provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed high-performance deployment procedure, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Deployment**: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed high-performance deployment procedure, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_inference.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. For the detailed service-oriented deployment procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -754,7 +754,7 @@ print_r($result["sealImpressions"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development
@@ -801,4 +801,4 @@ paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --s
 ```
 paddlex --pipeline seal_recognition --input seal_text_det.png --device npu:0 --save_path output
 ```
-If you want to use the seal text recognition pipeline on more kinds of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the seal text recognition pipeline on more kinds of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 22 - 22
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition_en.md

@@ -1,13 +1,13 @@
 [简体中文](seal_recognition.md) | English
 
-# Tutorial for Using Seal Text Recognition Pipeline
-
-## 1. Introduction to the Seal Text Recognition Pipeline
-Seal text recognition is a technology that automatically extracts and recognizes seal content from documents or images. The recognition of seal text is part of document processing and has various applications in many scenarios, such as contract comparison, inventory access approval, and invoice reimbursement approval.
+# Seal Recognition Pipeline Tutorial
+ 
+## 1. Introduction to the Seal Recognition Pipeline
+Seal recognition is a technology that automatically extracts and recognizes seal content from documents or images. Recognizing seals is part of document processing and has applications in many scenarios, such as contract comparison, inventory access approval, and invoice reimbursement approval.
 
 ![](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_seal/01.png)
 
-The **Seal Text Recognition** pipeline includes a layout area analysis module, a seal text detection module, and a text recognition module.
+The **Seal Recognition** pipeline includes a layout area analysis module, a seal detection module, and a text recognition module.
 
 **If you prioritize model accuracy, please choose a model with higher accuracy. If you prioritize inference speed, please choose a model with faster inference. If you prioritize model storage size, please choose a model with a smaller storage footprint.**
 
@@ -26,12 +26,12 @@ The **Seal Text Recognition** pipeline includes a layout area analysis module, a
 **Note: The evaluation set for the above accuracy indicators is a self-built layout area analysis dataset from PaddleX, containing 10,000 images. The GPU inference time for all models above is based on an NVIDIA Tesla T4 machine with a precision type of FP32. The CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads, and the precision type is also FP32.**
 
 
-**Seal Text Detection Module Models**:
+**Seal Detection Module Models**:
 
 | Model | Detection Hmean (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
 |-------|---------------------|-------------------------|-------------------------|--------------|-------------|
-| PP-OCRv4_server_seal_det | 98.21 | 84.341 | 2425.06 | 109 | PP-OCRv4's server-side seal text detection model, featuring higher accuracy, suitable for deployment on better-equipped servers |
-| PP-OCRv4_mobile_seal_det | 96.47 | 10.5878 | 131.813 | 4.6 | PP-OCRv4's mobile seal text detection model, offering higher efficiency, suitable for deployment on edge devices |
+| PP-OCRv4_server_seal_det | 98.21 | 84.341 | 2425.06 | 109 | PP-OCRv4's server-side seal detection model, featuring higher accuracy, suitable for deployment on better-equipped servers |
+| PP-OCRv4_mobile_seal_det | 96.47 | 10.5878 | 131.813 | 4.6 | PP-OCRv4's mobile seal detection model, offering higher efficiency, suitable for deployment on edge devices |
 
 **Note: The above accuracy metrics are evaluated on a self-built dataset containing 500 circular seal images. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
@@ -48,20 +48,20 @@ The **Seal Text Recognition** pipeline includes a layout area analysis module, a
 </details>
 
 ## 2.  Quick Start
+The pre-trained pipelines provided by PaddleX allow you to see results quickly. You can try the seal recognition pipeline online, or experience it locally using the command line or Python.
+The pre trained model production line provided by PaddleX can quickly experience the effect. You can experience the effect of the seal recognition production line online, or use the command line or Python locally to experience the effect of the seal recognition production line.
 
 ### 2.1 Online Experience
-You can [experience online](https://aistudio.baidu.com/community/app/182491/webUI) the effect of seal text recognition in the v3 production line for extracting document scene information, using official demo images for recognition, for example:
+You can [experience online](https://aistudio.baidu.com/community/app/182491/webUI) the effect of seal recognition in the v3 production line for extracting document scene information, using official demo images for recognition, for example:
 
 ! []( https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/seal_recognition/02.png )
 
 If you are satisfied with the performance of the production line, you can directly integrate and deploy the production line. If you are not satisfied, you can also use private data to fine tune the models in the production line online.
 
 ### 2.2 Local Experience
-Before using the seal text recognition production line locally, please ensure that you have completed the wheel package installation of PaddleX according to the  [PaddleX Local Installation Guide](../../../installation/installation_en.md).
+Before using the seal recognition production line locally, please ensure that you have completed the wheel package installation of PaddleX according to the  [PaddleX Local Installation Guide](../../../installation/installation_en.md).
 
 ### 2.3 Command line experience
+A single command is enough to try the seal recognition pipeline. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png), and replace `--input` with a local path to run prediction:
+One command can quickly experience the effect of seal recognition production line, use [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png), and replace ` --input ` with the local path for prediction
 
 ```
 paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --save_path output
@@ -70,12 +70,12 @@ paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --s
 Parameter description:
 
 ```
+--pipeline: the name of the pipeline, here the seal recognition pipeline
+--Pipeline: Production line name, here is the seal recognition production line
 --Input: The local path or URL of the input image to be processed
 --The GPU serial number used by the device (e.g. GPU: 0 indicates the use of the 0th GPU, GPU: 1,2 indicates the use of the 1st and 2nd GPUs), or the CPU (-- device CPU) can be selected for use
 ```
 
+When the above command is executed, the default seal recognition pipeline configuration file is loaded. If you need to customize the configuration file, you can run the following command to obtain it:
+When executing the above Python script, the default seal recognition production line configuration file is loaded. If you need to customize the configuration file, you can execute the following command to obtain it:
 
 <details>
 <summary>  👉 Click to expand</summary>
@@ -84,7 +84,7 @@ When executing the above Python script, the default seal text recognition produc
 paddlex --get_pipeline_config seal_recognition
 ```
 
-After execution, the seal text recognition production line configuration file will be saved in the current path. If you want to customize the save location, you can execute the following command (assuming the custom save location is `./my_path `):
+After execution, the seal recognition pipeline configuration file will be saved in the current directory. If you want to customize the save location, you can run the following command (assuming the custom save location is `./my_path`):
 
 ```bash
 paddlex --get_pipeline_config seal_recognition --save_path ./my_path
@@ -221,7 +221,7 @@ The visualized image not saved by default. You can customize the save path throu
 
 
 ###  2.2 Python Script Integration
-A few lines of code can complete the fast inference of the production line. Taking the seal text recognition production line as an example:
+A few lines of code are enough to run fast inference with the pipeline. Taking the seal recognition pipeline as an example:
 
 ```python
 from paddlex import create_pipeline
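# --- Illustrative continuation (not part of the original snippet) ---
# A typical completion of this example, based on the CLI invocation shown earlier;
# the exact arguments and result methods are assumptions, not taken from this commit.
pipeline = create_pipeline(pipeline="seal_recognition")
output = pipeline.predict("seal_text_det.png")
for res in output:
    res.print()                   # print the structured recognition result
    res.save_to_img("./output/")  # save the visualized result image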
@@ -320,7 +320,7 @@ Operations provided by the service:
 
 - **`infer`**
 
-    Obtain seal text recognition results from an image.
+    Obtain seal recognition results from an image.
 
     `POST /seal-recognition`
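As a rough illustration of how a client might call this operation once the pipeline is served, here is a minimal Python sketch. The host/port, the Base64 `image` request field, and the `result` response wrapper are assumptions based on the serving examples elsewhere in these docs; `sealImpressions` and `layoutImage` come from the response description below.

```python
import base64
import requests

API_URL = "http://localhost:8080/seal-recognition"  # assumed serving address

with open("seal_text_det.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

# Assumption: the request carries the input image as a Base64-encoded string.
resp = requests.post(API_URL, json={"image": image_b64})
resp.raise_for_status()
result = resp.json()["result"]  # assumed response wrapper

print(result["sealImpressions"])  # recognized seal contents
with open("layout.jpg", "wb") as f:  # Base64-encoded JPEG, per the table below
    f.write(base64.b64decode(result["layoutImage"]))
```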
 
@@ -341,7 +341,7 @@ Operations provided by the service:
 
         | Name | Type | Description |
         |------|------|-------------|
-        |`sealImpressions`|`array`|Seal text recognition results.|
+        |`sealImpressions`|`array`|Seal recognition results.|
         |`layoutImage`|`string`|Layout area detection result image. The image is in JPEG format and encoded using Base64.|
 
         Each element in `sealImpressions` is an `object` with the following properties:
@@ -730,10 +730,10 @@ print_r($result["sealImpressions"]);
 <br/>
 
 ## 4. Custom Development
-If the default model weights provided by the seal text recognition production line are not satisfactory in terms of accuracy or speed in your scenario, you can try using your own specific domain or application scenario data to further fine tune the existing model to improve the recognition performance of the seal text recognition production line in your scenario.
+If the default model weights provided by the seal recognition pipeline do not meet your accuracy or speed requirements, you can try fine-tuning the existing models with data from your own domain or application scenario to improve recognition performance in your scenario.
 
 ### 4.1 Model fine-tuning
-Due to the fact that the seal text recognition production line consists of three modules, the performance of the model production line may not be as expected due to any of these modules.
+Because the seal recognition pipeline consists of three modules, unsatisfactory performance may be caused by any one of them.
 
 You can analyze images with poor recognition performance and refer to the following rules for analysis and model fine-tuning:
 
@@ -764,7 +764,7 @@ Subsequently, refer to the command line or Python script in the local experience
 ##  5.  Multiple hardware support
 PaddleX supports various mainstream hardware devices such as Nvidia GPU, Kunlun Core XPU, Ascend NPU, and Cambrian MLU, and can seamlessly switch between different hardware devices by simply modifying the **`--device`** parameter.
 
-For example, if you use Nvidia GPU for inference on a seal text recognition production line, the Python command you use is:
+For example, to run inference with the seal recognition pipeline on an NVIDIA GPU, the command is:
 
 ```
 paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --save_path output
@@ -776,4 +776,4 @@ At this point, if you want to switch the hardware to Ascend NPU, simply modify t
 paddlex --pipeline seal_recognition --input seal_text_det.png --device npu:0 --save_path output
 ```
 
-If you want to use the seal text recognition production line on a wider range of hardware, please refer to the [PaddleX Multi Hardware Usage Guide](../../../other_devices_support/installation_other_devices_en.md)。
+If you want to use the seal recognition pipeline on a wider range of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md

@@ -785,7 +785,7 @@ print_r($result["tables"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 2 - 2
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md

@@ -1,6 +1,6 @@
 [简体中文](table_recognition.md) | English
 
-# General Table Recognition Pipeline Usage Tutorial
+# General Table Recognition Pipeline Tutorial
 
 ## 1. Introduction to the General Table Recognition Pipeline
 Table recognition is a technology that automatically identifies and extracts table content and its structure from documents or images. It is widely used in data entry, information retrieval, and document analysis. By leveraging computer vision and machine learning algorithms, table recognition can convert complex table information into editable formats, facilitating further data processing and analysis for users.
@@ -698,7 +698,7 @@ print_r($result["tables"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md

@@ -597,7 +597,7 @@ echo "Output time-series data saved at " . $output_csv_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md

@@ -568,7 +568,7 @@ echo "Output time-series data saved at " . $output_csv_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md

@@ -533,7 +533,7 @@ echo "label: " . $result["label"] . ", score: " . $result["score"];
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
 

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md

@@ -518,7 +518,7 @@ echo "label: " . $result["label"] . ", score: " . $result["score"];
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. Refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md) for detailed edge deployment procedures.
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. Refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md) for detailed edge deployment procedures.
 Choose the appropriate deployment method based on your needs to proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md

@@ -596,7 +596,7 @@ echo "Output time-series data saved at " . $output_csv_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

 ## 4. Custom Development

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md

@@ -570,7 +570,7 @@ echo "Output time-series data saved at " . $output_csv_path . "\n";
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, enabling devices to directly process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, enabling devices to directly process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
 ## 4. Custom Development

+ 4 - 4
docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial.md

@@ -21,7 +21,7 @@ PaddleX 提供了两种体验的方式,你可以在线体验文档场景信息
 
 ### 2.1 Local Experience
 
-Before using the Document Scene Information Extraction v3 pipeline locally, please make sure you have installed the PaddleX wheel package by following the [PaddleX Local Installation Tutorial](../../../installation/installation.md). A few lines of code are enough to run fast inference with the pipeline:
+Before using the Document Scene Information Extraction v3 pipeline locally, please make sure you have installed the PaddleX wheel package by following the [PaddleX Local Installation Tutorial](../installation/installation.md). A few lines of code are enough to run fast inference with the pipeline:
 
 
 ```python
@@ -422,13 +422,13 @@ chat_result = pipeline.chat(
 chat_result.print()
 ```
 
-For more parameters, please refer to the [Document Scene Information Extraction v3 Pipeline Usage Tutorial](../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md).
+For more parameters, please refer to the [Document Scene Information Extraction v3 Pipeline Usage Tutorial](../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md).
 
 2. In addition, PaddleX also provides three other deployment methods, detailed as follows:
 
-* High-performance deployment: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed procedure, refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
+* High-performance deployment: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed procedure, refer to the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_inference.md).
 * Service-oriented deployment: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. For the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).
-* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy.md).
+* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy.md).
 
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 

+ 3 - 3
docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial_en.md

@@ -18,7 +18,7 @@ PaddleX offers two ways to experience its capabilities. You can try out the Docu
 
 ### 2.1 Local Experience
 
-Before using the Document Scene Information Extraction v3 pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md). With just a few lines of code, you can quickly perform inference using the pipeline:
+Before using the Document Scene Information Extraction v3 pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the [PaddleX Local Installation Tutorial](../../docs/installation/installation_en.md). With just a few lines of code, you can quickly perform inference using the pipeline:
 
 
 ```python
@@ -420,12 +420,12 @@ chat_result = pipeline.chat(
 chat_result.print()
 ```
 
-For more parameters, please refer to the [Document Scene Information Extraction Pipeline Usage Tutorial](../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md).
+For more parameters, please refer to the [Document Scene Information Extraction Pipeline Usage Tutorial](../pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md).
 
 2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugin aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/image_classification_garbage_tutorial.md

@@ -253,6 +253,6 @@ for res in output:
 
 * High-performance deployment: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed procedure, refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference.md).
 * Service-oriented deployment: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. For the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).
-* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy.md).
+* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy.md).
 
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/image_classification_garbage_tutorial_en.md

@@ -256,6 +256,6 @@ For more parameters, please refer to the [General Image Classification Pipeline
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugin aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial.md

@@ -251,6 +251,6 @@ for res in output:
 
 * High-performance deployment: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed procedure, refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference.md).
 * Service-oriented deployment: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. For the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).
-* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy.md).
+* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy.md).
 
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial_en.md

@@ -256,6 +256,6 @@ For more parameters, please refer to the [General Instance Segmentation Pipline
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/object_detection_fall_tutorial.md

@@ -252,6 +252,6 @@ for res in output:
 
 * High-performance deployment: In real production environments, many applications have strict performance requirements (especially response latency) for their deployment strategies, to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for a significant end-to-end speedup. For the detailed procedure, refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference.md).
 * Service-oriented deployment: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. For the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy.md).
-* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy.md).
+* Edge deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed procedure, refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy.md).
 
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/object_detection_fall_tutorial_en.md

@@ -254,6 +254,6 @@ For more parameters, please refer to [General Object Detection Pipeline Usage Tu
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/object_detection_fashion_pedia_tutorial.md

@@ -253,6 +253,6 @@ for res in output:
 
 * 高性能部署:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考 [PaddleX 高性能推理指南](../pipeline_deploy/high_performance_inference.md)。
 * 服务化部署:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考 [PaddleX 服务化部署指南](../pipeline_deploy/service_deploy.md)。
-* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/lite_deploy.md)。
+* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/edge_deploy.md)。
 
 您可以根据需要选择合适的方式部署模型产线,进而进行后续的 AI 应用集成。

+ 1 - 1
docs/practical_tutorials/object_detection_fashion_pedia_tutorial_en.md

@@ -255,6 +255,6 @@ For more parameters, please refer to [General Object Detection Pipeline Usage Tu
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/ocr_det_license_tutorial.md

@@ -256,6 +256,6 @@ for res in output:
 
 * 高性能部署:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考 [PaddleX 高性能推理指南](../pipeline_deploy/high_performance_inference.md)。
 * 服务化部署:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考 [PaddleX 服务化部署指南](../pipeline_deploy/service_deploy.md)。
-* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/lite_deploy.md)。
+* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/edge_deploy.md)。
 
 您可以根据需要选择合适的方式部署模型产线,进而进行后续的 AI 应用集成。

+ 1 - 1
docs/practical_tutorials/ocr_det_license_tutorial_en.md

@@ -258,6 +258,6 @@ For more parameters, please refer to the [General OCR Pipeline Usage Tutorial](.
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/ocr_rec_chinese_tutorial.md

@@ -258,6 +258,6 @@ for res in output:
 
 * 高性能部署:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考 [PaddleX 高性能推理指南](../pipeline_deploy/high_performance_inference.md)。
 * 服务化部署:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考 [PaddleX 服务化部署指南](../pipeline_deploy/service_deploy.md)。
-* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/lite_deploy.md)。
+* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/edge_deploy.md)。
 
 您可以根据需要选择合适的方式部署模型产线,进而进行后续的 AI 应用集成。

+ 1 - 1
docs/practical_tutorials/ocr_rec_chinese_tutorial_en.md

@@ -261,6 +261,6 @@ For more parameters, please refer to the [General OCR Pipeline Usage Tutorial](.
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 1 - 1
docs/practical_tutorials/semantic_segmentation_road_tutorial.md

@@ -249,6 +249,6 @@ for res in output:
 
 * 高性能部署:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考 [PaddleX 高性能推理指南](../pipeline_deploy/high_performance_inference.md)。
 * 服务化部署:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考 [PaddleX 服务化部署指南](../pipeline_deploy/service_deploy.md)。
-* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/lite_deploy.md)。
+* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/edge_deploy.md)。
 
 您可以根据需要选择合适的方式部署模型产线,进而进行后续的 AI 应用集成。

+ 1 - 1
docs/practical_tutorials/semantic_segmentation_road_tutorial_en.md

@@ -252,6 +252,6 @@ For more parameters, please refer to [General Semantic Segmentation Pipeline Usa
 
 * high-performance inference: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../pipeline_deploy/high_performance_inference_en.md).
 * Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving cost-effective service-oriented deployment of production lines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../pipeline_deploy/service_deploy_en.md).
-* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/lite_deploy_en.md).
+* Edge Deployment: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../pipeline_deploy/edge_deploy_en.md).
 
 You can select the appropriate deployment method for your model pipeline according to your needs, and proceed with subsequent AI application integration.

+ 3 - 3
docs/support_list/models_list_en.md

@@ -343,7 +343,7 @@ PaddleX incorporates multiple pipelines, each containing several modules, and ea
 
 **Note: The evaluation set for the above accuracy metrics is the ****PaddleX self-built Layout Detection Dataset****, containing 10,000 images.**
 
-## [Time Series Forecasting Module](../module_usage/tutorials/ts_modules/time_series_forecast_en.md)
+## [Time Series Forecasting Module](../module_usage/tutorials/time_series_modules/time_series_forecast_en.md)
 |Model Name|mse|mae|Model Size|YAML File|
 |-|-|-|-|-|
 |DLinear|0.382|0.394|72 K|[DLinear.yaml](../../paddlex/configs/ts_forecast/DLinear.yaml)|
@@ -356,7 +356,7 @@ PaddleX incorporates multiple pipelines, each containing several modules, and ea
 
 **Note: The above accuracy metrics are measured on the **[ETTH1](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/Etth1.tar)** dataset ****(evaluation results on the test set test.csv)****.**
 
-## [Time Series Anomaly Detection Module](../module_usage/tutorials/ts_modules/time_series_anomaly_detection_en.md)
+## [Time Series Anomaly Detection Module](../module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md)
 |Model Name|Precision|Recall|f1_score|Model Size|YAML File|
 |-|-|-|-|-|-|
 |AutoEncoder_ad|99.36|84.36|91.25|52 K |[AutoEncoder_ad.yaml](../../paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml)|
@@ -367,7 +367,7 @@ PaddleX incorporates multiple pipelines, each containing several modules, and ea
 
 **Note: The above accuracy metrics are measured on the **[PSM](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ts_anomaly_examples.tar)** dataset.**
 
-## [Time Series Classification Module](../module_usage/tutorials/ts_modules/time_series_classification_en.md)
+## [Time Series Classification Module](../module_usage/tutorials/time_series_modules/time_series_classification_en.md)
 |Model Name|acc (%)|Model Size|YAML File|
 |-|-|-|-|
 |TimesNet_cls|87.5|792 K|[TimesNet_cls.yaml](../../paddlex/configs/ts_classification/TimesNet_cls.yaml)|

+ 81 - 9
docs/support_list/pipelines_list.md

@@ -66,6 +66,37 @@
     </td>
   </tr>
   <tr>
+    <td rowspan = 7>文档场景信息抽取v3</td>
+    <td>表格结构识别</td>
+    <td rowspan = 7><a href="https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter">在线体验</a></td>
+    <td rowspan = 7>文档图像场景信息抽取v3(PP-ChatOCRv3-doc)是飞桨特色的文档和图像智能分析解决方案,结合了 LLM 和 OCR 技术,一站式解决版面分析、生僻字、多页 pdf、表格、印章识别等常见的复杂文档信息抽取难点问题,结合文心大模型将海量数据和知识相融合,准确率高且应用广泛。开源版支持本地体验和本地部署,支持各个模块的微调训练。</td>
+    <td rowspan="7">
+  <ul>
+    <li>知识图谱的构建</li>
+    <li>在线新闻和社交媒体中特定事件相关信息的检测</li>
+    <li>学术文献中关键信息的抽取和分析(特别是需要对印章、扭曲图片、更复杂表格进行识别的场景)</li>
+  </ul>
+</td>
+  </tr>
+  <tr>
+    <td>版面区域检测</td>
+  </tr>
+  <tr>
+    <td>文本检测</td>
+  </tr>
+  <tr>
+    <td>文本识别</td>
+  </tr>
+  <tr>
+    <td>印章文本检测</td>
+  </tr>
+  <tr>
+    <td>文本图像矫正</td>
+  </tr>
+  <tr>
+    <td>文档图像方向分类</td>
+  </tr>
+  <tr>
     <td rowspan = 2>通用OCR</td>
     <td>文本检测</td>
     <td rowspan = 2><a href="https://aistudio.baidu.com/community/app/91660/webUI?source=appMineRecent">在线体验</a></td>
@@ -82,7 +113,6 @@
   <tr>
     <td>文本识别</td>
   </tr>
-  <tr>
     <td rowspan = 4>通用表格识别</td>
     <td>版面区域检测</td>
     <td rowspan = 4><a href="https://aistudio.baidu.com/community/app/91661/webUI">在线体验</a></td>
@@ -204,15 +234,16 @@
   </ul></td>
   </tr>
   <tr>
-    <td rowspan = 7>文档场景信息抽取v3</td>
+    <td rowspan = 8>通用版面解析</td>
     <td>表格结构识别</td>
-    <td rowspan = 7><a href="https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter">在线体验</a></td>
-    <td rowspan = 7>文档图像场景信息抽取v3(PP-ChatOCRv3-doc)是飞桨特色的文档和图像智能分析解决方案,结合了 LLM 和 OCR 技术,一站式解决版面分析、生僻字、多页 pdf、表格、印章识别等常见的复杂文档信息抽取难点问题,结合文心大模型将海量数据和知识相融合,准确率高且应用广泛。开源版支持本地体验和本地部署,支持各个模块的微调训练。</td>
-    <td rowspan="7">
+    <td rowspan = 8>暂无</td>
+    <td rowspan = 8>版面解析是一种从文档图像中提取结构化信息的技术,主要用于将复杂的文档版面转换为机器可读的数据格式。这项技术在文档管理、信息提取和数据数字化等领域具有广泛的应用。版面解析通过结合光学字符识别(OCR)、图像处理和机器学习算法,能够识别和提取文档中的文本块、标题、段落、图片、表格以及其他版面元素。此过程通常包括版面分析、元素分析和数据格式化三个主要步骤,最终生成结构化的文档数据,提升数据处理的效率和准确性。</td>
+    <td rowspan="8">
   <ul>
-    <li>知识图谱的构建</li>
-    <li>在线新闻和社交媒体中特定事件相关信息的检测</li>
-    <li>学术文献中关键信息的抽取和分析(特别是需要对印章、扭曲图片、更复杂表格进行识别的场景)</li>
+    <li>金融与法律文档分析</li>
+    <li>历史文献和档案数字化</li>
+    <li>自动化表单填写</li>
+    <li>页面结构解析</li>
   </ul>
 </td>
   </tr>
@@ -226,14 +257,55 @@
     <td>文本识别</td>
   </tr>
   <tr>
+    <td>公式识别</td>
+  </tr>
+  <tr>
     <td>印章文本检测</td>
   </tr>
   <tr>
-    <td>文图像矫正</td>
+    <td>文本图像矫正</td>
   </tr>
   <tr>
     <td>文档图像方向分类</td>
   </tr>
+  <tr>
+    <td rowspan = 2>公式识别</td>
+    <td>版面区域检测</td>
+    <td rowspan = 2>暂无</td>
+    <td rowspan = 2>公式识别是一种自动从文档或图像中识别和提取LaTeX公式内容及其结构的技术,广泛应用于数学、物理、计算机科学等领域的文档编辑和数据分析。通过使用计算机视觉和机器学习算法,公式识别能够将复杂的数学公式信息转换为可编辑的LaTeX格式,方便用户进一步处理和分析数据。</td>
+    <td rowspan = 2>
+    <ul>
+        <li>文档数字化与检索</li>
+        <li>公式搜索引擎</li>
+        <li>公式编辑器</li>
+        <li>自动化排版</li>
+      </ul>
+      </td>
+  </tr>
+  <tr>
+    <td>公式识别</td>
+  </tr>
+  <tr>
+    <td rowspan = 3>印章文本识别</td>
+    <td>版面区域检测</td>
+    <td rowspan = 3>暂无</td>
+    <td rowspan = 3>印章文本识别是一种自动从文档或图像中提取和识别印章内容的技术,印章文本的识别是文档处理的一部分,在很多场景都有用途,例如合同比对,出入库审核以及发票报销审核等场景。</td>
+    <td rowspan = 3>
+    <ul>
+        <li>合同和协议验证</li>
+        <li>支票处理</li>
+        <li>贷款审批</li>
+        <li>法律文书管理</li>
+      </ul>
+      </td>
+  </tr>
+  <tr>
+    <td>印章文本检测</td>
+  </tr>
+  <tr>
+    <td>文本识别</td>
+  </tr>
+  <tr>
 </table>
 
 

+ 158 - 6
docs/support_list/pipelines_list_en.md

@@ -13,7 +13,7 @@
     <th width="20%">Applicable Scenarios</th>
   </tr>
   <tr>
-    <td>General Image Classification</td>
+    <td>Image Classification</td>
     <td>Image Classification</td>
     <td><a href="https://aistudio.baidu.com/community/app/100061/webUI">Online Experience</a></td>
     <td>Image classification is a technique that assigns images to predefined categories. It is widely used in object recognition, scene understanding, and automatic annotation. Image classification can identify various objects such as animals, plants, traffic signs, etc., and categorize them based on their features. By leveraging deep learning models, image classification can automatically extract image features and perform accurate classification. The General Image Classification Pipeline is designed to solve image classification tasks for given images.</td>
@@ -26,7 +26,7 @@
     </td>
   </tr>
   <tr>
-    <td>General Object Detection</td>
+    <td>Object Detection</td>
     <td>Object Detection</td>
     <td><a href="https://aistudio.baidu.com/community/app/70230/webUI">Online Experience</a></td>
     <td>Object detection aims to identify the categories and locations of multiple objects in images or videos by generating bounding boxes to mark these objects. Unlike simple image classification, object detection not only recognizes what objects are in the image, such as people, cars, and animals, but also accurately determines the specific location of each object, usually represented by a rectangular box. This technology is widely used in autonomous driving, surveillance systems, and smart photo albums, relying on deep learning models (e.g., YOLO, Faster R-CNN) that efficiently extract features and perform real-time detection, significantly enhancing the computer's ability to understand image content.</td>
@@ -40,7 +40,7 @@
     </td>
   </tr>
   <tr>
-    <td>General Semantic Segmentation</td>
+    <td>Semantic Segmentation</td>
     <td>Semantic Segmentation</td>
     <td><a href="https://aistudio.baidu.com/community/app/100062/webUI?source=appCenter">Online Experience</a></td>
     <td>Semantic segmentation is a computer vision technique that assigns each pixel in an image to a specific category, enabling detailed understanding of image content. Semantic segmentation not only identifies the types of objects in an image but also classifies each pixel, allowing entire regions of the same category to be marked. For example, in a street scene image, semantic segmentation can distinguish pedestrians, cars, sky, and roads at the pixel level, forming a detailed label map. This technology is widely used in autonomous driving, medical image analysis, and human-computer interaction, often relying on deep learning models (e.g., FCN, U-Net) that use Convolutional Neural Networks (CNNs) to extract features and achieve high-precision pixel-level classification, providing a foundation for further intelligent analysis.</td>
@@ -53,7 +53,7 @@
     </td>
   </tr>
   <tr>
-    <td>General Instance Segmentation</td>
+    <td>Instance Segmentation</td>
     <td>Instance Segmentation</td>
     <td><a href="https://aistudio.baidu.com/community/app/100063/webUI">Online Experience</a></td>
     <td>Instance segmentation is a computer vision task that identifies object categories in images and distinguishes the pixels of different instances within the same category, enabling precise segmentation of each object. Instance segmentation can separately mark each car, person, or animal in an image, ensuring they are processed independently at the pixel level. For example, in a street scene image with multiple cars and pedestrians, instance segmentation can clearly separate the contours of each car and person, forming multiple independent region labels. This technology is widely used in autonomous driving, video surveillance, and robot vision, often relying on deep learning models (e.g., Mask R-CNN) that use CNNs for efficient pixel classification and instance differentiation, providing powerful support for understanding complex scenes.</td>
@@ -65,8 +65,39 @@
       </ul>
     </td>
   </tr>
+<tr>
+    <td rowspan = 7>Document Scene Information Extraction v3</td>
+    <td>Table Structure Recognition</td>
+    <td rowspan = 7><a href="https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter">Online Experience</a></td>
+    <td rowspan = 7>Document Image Scene Information Extraction v3 (PP-ChatOCRv3-doc) is a PaddlePaddle-specific intelligent document and image analysis solution that integrates LLM and OCR technologies to solve common complex document information extraction challenges such as layout analysis, rare characters, multi-page PDFs, tables, and seal recognition. By integrating the Wenxin large model, it combines vast data and knowledge, providing high accuracy and wide applicability. The open-source version supports local experience and deployment, and fine-tuning training for each module.</td>
+    <td rowspan="7">
+  <ul>
+    <li>Construction of knowledge graphs</li>
+    <li>Detection of information related to specific events in online news and social media</li>
+    <li>Extraction and analysis of key information in academic literature (especially in scenarios requiring recognition of seals, distorted images, and more complex tables)</li>
+  </ul>
+</td>
+  </tr>
+  <tr>
+    <td>Layout Area Detection</td>
+  </tr>
+  <tr>
+    <td>Text Detection</td>
+  </tr>
+  <tr>
+    <td>Text Recognition</td>
+  </tr>
+  <tr>
+    <td>Seal Text Detection</td>
+  </tr>
+  <tr>
+    <td>Text Image Correction</td>
+  </tr>
   <tr>
-    <td rowspan = 2>General OCR</td>
+    <td>Document Image Orientation Classification</td>
+  </tr>
+  <tr>
+    <td rowspan = 2>OCR</td>
     <td >Text Detection</td>
     <td rowspan = 2><a href="https://aistudio.baidu.com/community/app/91660/webUI?source=appMineRecent">Online Experience</a></td>
     <td rowspan = 2>OCR (Optical Character Recognition) is a technology that converts text in images into editable text. It is widely used in document digitization, information extraction, and data processing. OCR can recognize printed text, handwritten text, and even certain types of fonts and symbols. The General OCR Pipeline is designed to solve text recognition tasks, extracting text information from images and outputting it in text form. PP-OCRv4 is an end-to-end OCR system that achieves millisecond-level text content prediction on CPUs, achieving state-of-the-art (SOTA) performance in general scenarios. Based on this project, developers from academia, industry, and research have quickly implemented various OCR applications covering general, manufacturing, finance, transportation.</td>
@@ -82,7 +113,7 @@
     <td>Text Recognition</td>
   </tr>
 <tr>
-        <td rowspan = 4>General Table Recognition</td>
+        <td rowspan = 4>Table Recognition</td>
         <td>Layout Detection</td>
         <td rowspan = 4><a href="https://aistudio.baidu.com/community/app/91661/webUI">Online Experience</a></td>
         <td rowspan = 4>Table recognition is a technology that automatically identifies and extracts table content and its structure from documents or images. It is widely used in data entry, information retrieval, and document analysis. By leveraging computer vision and machine learning algorithms, table recognition can convert complex table information into editable formats, facilitating further data processing and analysis by users</td>
@@ -152,6 +183,127 @@
         <li>Equipment Operating Condition Classification</li>
       </ul>
       </td>
+<tr>
+    <td>Multi-label Image Classification</td>
+    <td>Multi-label Image Classification</td>
+    <td>None</td>
+    <td>Image multi-label classification is a technology that assigns an image to multiple related categories simultaneously. It is widely used in image tagging, content recommendation, and social media analysis. It can identify multiple objects or features present in an image, such as both "dog" and "outdoor" labels in a single picture. By using deep learning models, image multi-label classification can automatically extract image features and perform accurate classification to provide more comprehensive information for users. This technology is significant in applications like intelligent search engines and automatic content generation.</td>
+    <td>
+    <ul>
+        <li>Medical image diagnosis</li>
+        <li>Complex scene recognition</li>
+        <li>Multi-target monitoring</li>
+        <li>Product attribute recognition</li>
+        <li>Ecological environment monitoring</li>
+        <li>Security monitoring</li>
+        <li>Disaster warning</li>
+      </ul>
+      </td>
+  </tr>
+  <tr>
+    <td>Small Object Detection</td>
+    <td>Small Object Detection</td>
+    <td>None</td>
+    <td>Small object detection is a technology specifically for identifying small objects in images. It is widely used in surveillance, autonomous driving, and satellite image analysis. It can accurately find and classify small-sized objects like pedestrians, traffic signs, or small animals in complex scenes. By using deep learning algorithms and optimized convolutional neural networks, small object detection can effectively enhance the recognition ability of small objects, ensuring that important information is not missed in practical applications. This technology plays an important role in improving safety and automation levels.</td>
+    <td>
+  <ul>
+    <li>Pedestrian detection in autonomous vehicles</li>
+    <li>Identification of small buildings in satellite images</li>
+    <li>Detection of small traffic signs in intelligent transportation systems</li>
+    <li>Identification of small intruding objects in security surveillance</li>
+    <li>Detection of small defects in industrial inspection</li>
+    <li>Monitoring of small animals in drone images</li>
+  </ul>
+</td>
+  </tr>
+  <tr>
+    <td>Image Anomaly Detection</td>
+    <td>Image Anomaly Detection</td>
+    <td>None</td>
+    <td>Image anomaly detection is a technology that identifies images that deviate from or do not conform to normal patterns by analyzing their content. It is widely used in industrial quality inspection, medical image analysis, and security surveillance. By using machine learning and deep learning algorithms, image anomaly detection can automatically identify potential defects, anomalies, or abnormal behavior in images, helping us detect problems and take appropriate measures promptly. Image anomaly detection systems are designed to automatically detect and label abnormal situations in images to improve work efficiency and accuracy.</td>
+    <td>
+    <ul>
+    <li>Industrial quality control</li>
+    <li>Medical image analysis</li>
+    <li>Anomaly detection in surveillance videos</li>
+    <li>Identification of violations in traffic monitoring</li>
+    <li>Obstacle detection in autonomous driving</li>
+    <li>Agricultural pest and disease monitoring</li>
+    <li>Pollutant identification in environmental monitoring</li>
+  </ul></td>
+  </tr>
+  <tr>
+    <td rowspan = 8>Layout Parsing</td>
+    <td>Table Structure Recognition</td>
+    <td rowspan = 8>None</td>
+    <td rowspan = 8>Layout analysis is a technology for extracting structured information from document images, primarily used to convert complex document layouts into machine-readable data formats. This technology has wide applications in document management, information extraction, and data digitization. By combining optical character recognition (OCR), image processing, and machine learning algorithms, layout analysis can identify and extract text blocks, titles, paragraphs, images, tables, and other layout elements from documents. This process typically includes three main steps: layout analysis, element analysis, and data formatting, ultimately generating structured document data that enhances data processing efficiency and accuracy.</td>
+    <td rowspan="8">
+  <ul>
+    <li>Financial and legal document analysis</li>
+    <li>Digitization of historical documents and archives</li>
+    <li>Automated form filling</li>
+    <li>Page structure analysis</li>
+  </ul>
+</td>
+  </tr>
+  <tr>
+    <td>Layout Area Detection</td>
+  </tr>
+  <tr>
+    <td>Text Detection</td>
+  </tr>
+  <tr>
+    <td>Text Recognition</td>
+  </tr>
+  <tr>
+    <td>Formula Recognition</td>
+  </tr>
+  <tr>
+    <td>Seal Text Detection</td>
+  </tr>
+  <tr>
+    <td>Text Image Correction</td>
+  </tr>
+  <tr>
+    <td>Document Image Orientation Classification</td>
+  </tr>
+    <tr>
+    <td rowspan = 2>Formula Recognition</td>
+    <td>Layout Area Detection</td>
+    <td rowspan = 2>None</td>
+    <td rowspan = 2>Formula recognition is a technology that automatically identifies and extracts LaTeX formula content and its structure from documents or images. It is widely used in document editing and data analysis in fields such as mathematics, physics, and computer science. By using computer vision and machine learning algorithms, formula recognition can convert complex mathematical formula information into an editable LaTeX format, facilitating further data processing and analysis by users.</td>
+    <td rowspan = 2>
+    <ul>
+        <li>Document digitization and retrieval</li>
+        <li>Formula search engine</li>
+        <li>Formula editor</li>
+        <li>Automated typesetting</li>
+      </ul>
+      </td>
+  </tr>
+  <tr>
+    <td>Formula Recognition</td>
+  </tr>
+  <tr>
+    <td rowspan = 3>Seal Text Recognition</td>
+    <td>Layout Area Detection</td>
+    <td rowspan = 3>None</td>
+    <td rowspan = 3>Seal text recognition is a technology that automatically extracts and recognizes seal content from documents or images. Recognizing seal text is part of document processing and has applications in many scenarios, such as contract comparison, inventory audit, and invoice reimbursement audit.</td>
+    <td rowspan = 3>
+    <ul>
+        <li>Contract and agreement validation</li>
+        <li>Check processing</li>
+        <li>Loan approval</li>
+        <li>Legal document management</li>
+      </ul>
+      </td>
+  </tr>
+  <tr>
+    <td>Seal Text Detection</td>
+  </tr>
+  <tr>
+    <td>Text Recognition</td>
+  </tr>
 </table>
 
 ## 2. Featured Pipelines