
update docs (#2302)

* update modules docs

* fix hyperlinks

* update module docs
AmberC0209 committed 1 year ago
commit 1884d30207
59 files changed, 621 insertions(+), 354 deletions(-)
  1. README.md (+18 -18)
  2. README_en.md (+18 -18)
  3. docs/module_usage/tutorials/cv_modules/anomaly_detection.md (+2 -6)
  4. docs/module_usage/tutorials/cv_modules/anomaly_detection_en.md (+7 -11)
  5. docs/module_usage/tutorials/cv_modules/face_detection.md (+3 -7)
  6. docs/module_usage/tutorials/cv_modules/face_detection_en.md (+6 -10)
  7. docs/module_usage/tutorials/cv_modules/human_detection.md (+3 -8)
  8. docs/module_usage/tutorials/cv_modules/human_detection_en.md (+6 -10)
  9. docs/module_usage/tutorials/cv_modules/image_classification.md (+79 -11)
  10. docs/module_usage/tutorials/cv_modules/image_classification_en.md (+79 -2)
  11. docs/module_usage/tutorials/cv_modules/image_feature.md (+2 -6)
  12. docs/module_usage/tutorials/cv_modules/image_feature_en.md (+2 -5)
  13. docs/module_usage/tutorials/cv_modules/instance_segmentation.md (+30 -3)
  14. docs/module_usage/tutorials/cv_modules/instance_segmentation_en.md (+30 -2)
  15. docs/module_usage/tutorials/cv_modules/mainbody_detection.md (+2 -6)
  16. docs/module_usage/tutorials/cv_modules/mainbody_detection_en.md (+6 -10)
  17. docs/module_usage/tutorials/cv_modules/ml_classification.md (+2 -6)
  18. docs/module_usage/tutorials/cv_modules/ml_classification_en.md (+2 -5)
  19. docs/module_usage/tutorials/cv_modules/object_detection.md (+63 -3)
  20. docs/module_usage/tutorials/cv_modules/object_detection_en.md (+61 -2)
  21. docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md (+2 -6)
  22. docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition_en.md (+2 -7)
  23. docs/module_usage/tutorials/cv_modules/semantic_segmentation.md (+9 -3)
  24. docs/module_usage/tutorials/cv_modules/semantic_segmentation_en.md (+10 -6)
  25. docs/module_usage/tutorials/cv_modules/small_object_detection.md (+4 -8)
  26. docs/module_usage/tutorials/cv_modules/small_object_detection_en.md (+7 -11)
  27. docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md (+2 -6)
  28. docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition_en.md (+3 -6)
  29. docs/module_usage/tutorials/cv_modules/vehicle_detection.md (+3 -7)
  30. docs/module_usage/tutorials/cv_modules/vehicle_detection_en.md (+6 -10)
  31. docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md (+2 -6)
  32. docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md (+2 -7)
  33. docs/module_usage/tutorials/ocr_modules/formula_recognition.md (+2 -6)
  34. docs/module_usage/tutorials/ocr_modules/formula_recognition_en.md (+2 -5)
  35. docs/module_usage/tutorials/ocr_modules/layout_detection.md (+2 -6)
  36. docs/module_usage/tutorials/ocr_modules/layout_detection_en.md (+6 -10)
  37. docs/module_usage/tutorials/ocr_modules/seal_text_detection.md (+2 -6)
  38. docs/module_usage/tutorials/ocr_modules/seal_text_detection_en.md (+2 -7)
  39. docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md (+2 -6)
  40. docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md (+2 -5)
  41. docs/module_usage/tutorials/ocr_modules/text_detection.md (+2 -6)
  42. docs/module_usage/tutorials/ocr_modules/text_detection_en.md (+36 -4)
  43. docs/module_usage/tutorials/ocr_modules/text_image_unwarping.md (+0 -3)
  44. docs/module_usage/tutorials/ocr_modules/text_recognition.md (+32 -3)
  45. docs/module_usage/tutorials/ocr_modules/text_recognition_en.md (+32 -2)
  46. docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md (+2 -6)
  47. docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md (+2 -7)
  48. docs/module_usage/tutorials/time_series_modules/time_series_classification.md (+2 -6)
  49. docs/module_usage/tutorials/time_series_modules/time_series_classification_en.md (+2 -7)
  50. docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md (+2 -6)
  51. docs/module_usage/tutorials/time_series_modules/time_series_forecasting_en.md (+2 -7)
  52. docs/pipeline_deploy/edge_deploy_en.md (+1 -1)
  53. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing_en.md (+2 -2)
  54. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md (+2 -2)
  55. docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial_en.md (+2 -2)
  56. docs/practical_tutorials/ts_anomaly_detection_en.md (+2 -2)
  57. docs/practical_tutorials/ts_classification_en.md (+2 -2)
  58. docs/practical_tutorials/ts_forecast_en.md (+2 -2)
  59. docs/support_list/models_list_en.md (+1 -1)

+ 18 - 18
README.md

@@ -55,10 +55,10 @@ PaddleX 3.0 是基于飞桨框架构建的低代码开发工具,它集成了
  ## 📊 能力支持
 
 
-PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在线体验**,您可以快速体验各个产线的预训练模型效果,如果您对产线的预训练模型效果满意,可以直接对产线进行[高性能推理](./docs/pipeline_deploy/high_performance_deploy.md)/[服务化部署](./docs/pipeline_deploy/service_deploy.md)/[端侧部署](./docs/pipeline_deploy/edge_deploy.md),如果不满意,您也可以使用产线的**二次开发**能力,提升效果。完整的产线开发流程请参考[PaddleX产线使用概览](./docs/pipeline_usage/pipeline_develop_guide.md)或各产线使用[教程](#-文档)。
+PaddleX的各个产线均支持本地**快速推理**,部分模型支持在[AI Studio星河社区](https://aistudio.baidu.com/overview)上进行**在线体验**,您可以快速体验各个产线的预训练模型效果,如果您对产线的预训练模型效果满意,可以直接对产线进行[高性能推理](./docs/pipeline_deploy/high_performance_inference.md)/[服务化部署](./docs/pipeline_deploy/service_deploy.md)/[端侧部署](./docs/pipeline_deploy/edge_deploy.md),如果不满意,您也可以使用产线的**二次开发**能力,提升效果。完整的产线开发流程请参考[PaddleX产线使用概览](./docs/pipeline_usage/pipeline_develop_guide.md)或各产线使用[教程](#-文档)。
 
 
-此外,PaddleX 为开发者提供了基于[云端图形化开发界面](https://aistudio.baidu.com/pipeline/mine)的全流程开发工具, 点击【创建产线】,选择对应的任务场景和模型产线,就可以开启全流程开发。详细请参考[教程《零门槛开发产业级AI模型》](https://aistudio.baidu.com/practical/introduce/546656605663301)
+此外,PaddleX在[AI Studio星河社区](https://aistudio.baidu.com/overview)为开发者提供了基于[云端图形化开发界面](https://aistudio.baidu.com/pipeline/mine)的全流程开发工具, 点击【创建产线】,选择对应的任务场景和模型产线,就可以开启全流程开发。详细请参考[教程《零门槛开发产业级AI模型》](https://aistudio.baidu.com/practical/introduce/546656605663301)
 
 <table >
     <tr>
@@ -72,7 +72,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <th><a href = "https://aistudio.baidu.com/pipeline/mine">星河零代码产线</a></td>
     </tr>
     <tr>
-        <td>通用OCR</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md">通用OCR</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/91660/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -82,7 +82,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>文档场景信息抽取v3</td>
+        <td><a href="./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.md">文档场景信息抽取v3</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter">链接</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -92,7 +92,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>通用表格识别</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md">通用表格识别</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/91661?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -102,7 +102,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>通用目标检测</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md">通用目标检测</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/70230/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -112,7 +112,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>通用实例分割</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md">通用实例分割</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/100063/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -122,7 +122,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>通用图像分类</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md">通用图像分类</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/100061/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -132,7 +132,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>通用语义分割</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md">通用语义分割</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/100062/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -142,7 +142,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>时序预测</td>
+        <td><a href="./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md">时序预测</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/105706/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>🚧</td>
@@ -152,7 +152,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>时序异常检测</td>
+        <td><a href="./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md">时序异常检测</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/105708/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>🚧</td>
@@ -162,7 +162,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
     <tr>
-        <td>时序分类</td>
+        <td><a href="./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md">时序分类</a></td>
         <td><a href = "https://aistudio.baidu.com/community/app/105707/webUI?source=appMineRecent">链接</a></td>
         <td>✅</td>
         <td>🚧</td>
@@ -172,7 +172,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>✅</td>
     </tr>
         <tr>
-        <td>小目标检测</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md">小目标检测</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
@@ -182,7 +182,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>🚧</td>
     </tr>
         <tr>
-        <td>图像多标签分类</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md">图像多标签分类</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
@@ -192,7 +192,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>🚧</td>
     </tr>
     <tr>
-        <td>图像异常检测</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md">图像异常检测</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
@@ -202,7 +202,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>🚧</td>
     </tr>
     <tr>
-        <td>通用版面解析</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md">通用版面解析</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>🚧</td>
@@ -212,7 +212,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>🚧</td>
     </tr>
     <tr>
-        <td>公式识别</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md">公式识别</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>🚧</td>
@@ -222,7 +222,7 @@ PaddleX的各个产线均支持本地**快速推理**,部分模型支持**在
         <td>🚧</td>
     </tr>
     <tr>
-        <td>印章文本识别</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md">印章文本识别</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
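
The README hunks above replace plain pipeline names with relative `href="./docs/..."` links, and one of the PR's stated goals is "fix hyperlinks". A small sketch of how such relative links could be validated before merging; the regex and repository layout below are assumptions for illustration, not part of this PR:

```python
import re
from pathlib import Path

# Matches relative Markdown-file links of the form href="./docs/....md",
# the pattern used in the README tables above.
LINK_RE = re.compile(r'href="(\./[^"]+\.md)"')

def find_broken_links(markdown_text, repo_root):
    """Return the relative .md link targets that do not exist under repo_root."""
    targets = LINK_RE.findall(markdown_text)
    # Path() normalizes the leading "./", so joining with repo_root works directly.
    return [t for t in targets if not (Path(repo_root) / t).is_file()]
```

Running this over README.md and README_en.md against the repository root would surface any tutorial page a table row points at but that does not exist on disk.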

+ 18 - 18
README_en.md

@@ -42,7 +42,7 @@ PaddleX 3.0 is a low-code development tool for AI models built on the PaddlePadd
 
 ## 📣 Recent Updates
 
-🔥🔥 **"PaddleX Document Information Personalized Extraction Upgrade"**, PP-ChatOCRv3 innovatively provides custom development functions for OCR models based on data fusion technology, offering stronger model fine-tuning capabilities. Millions of high-quality general OCR text recognition data are automatically integrated into vertical model training data at a specific ratio, solving the problem of weakened general text recognition capabilities caused by vertical model training in the industry. Suitable for practical scenarios in industries such as automated office, financial risk control, healthcare, education and publishing, and legal and government sectors. **October 17th (Thursday) at 19:00** live broadcast will provide a detailed interpretation of data fusion technology and how to use prompt engineering to achieve better information extraction results. [Registration Link](https://www.wjx.top/vm/mFhGfwx.aspx?udsid=772552)
+🔥🔥 **"PaddleX Document Information Personalized Extraction Upgrade"**, PP-ChatOCRv3 innovatively provides custom development functions for OCR models based on data fusion technology, offering stronger model fine-tuning capabilities. Millions of high-quality general OCR text recognition data are automatically integrated into vertical model training data at a specific ratio, solving the problem of weakened general text recognition capabilities caused by vertical model training in the industry. Suitable for practical scenarios in industries such as automated office, financial risk control, healthcare, education and publishing, and legal and government sectors. **October 24th (Thursday) 19:00** Join our live session for an in-depth analysis of the open-source version of PP-ChatOCRv3 and the outstanding advantages of PaddleX 3.0 Beta1 in terms of accuracy and speed. [Registration Link](https://www.wjx.top/vm/wpPu8HL.aspx?udsid=994465)
 
 🔥🔥 **9.30, 2024**, PaddleX 3.0 Beta1 open source version is officially released, providing **more than 200 models** that can be called with a simple Python API; achieve model full-process development based on unified commands, and open source the basic capabilities of the **PP-ChatOCRv3** pipeline; support **more than 100 models for high-performance inference and service-oriented deployment** (iterating continuously), **more than 7 key visual models for edge-deployment**; **more than 70 models have been adapted for the full development process of Ascend 910B**, **more than 15 models have been adapted for the full development process of Kunlun chips and Cambricon**
 
@@ -56,7 +56,7 @@ PaddleX is dedicated to achieving pipeline-level model training, inference, and
 ## 📊 What can PaddleX do?
 
 
-All pipelines of PaddleX support **online experience** and local **fast inference**. You can quickly experience the effects of each pre-trained pipeline. If you are satisfied with the effects of the pre-trained pipeline, you can directly perform [high-performance inference](./docs/pipeline_deploy/high_performance_inference_en.md) / [serving deployment](./docs/pipeline_deploy/service_deploy_en.md) / [edge deployment](./docs/pipeline_deploy/edge_deploy_en.md) on the pipeline. If not satisfied, you can also **Custom Development** to improve the pipeline effect. For the complete pipeline development process, please refer to the [PaddleX pipeline Development Tool Local Use Tutorial](./docs/pipeline_usage/pipeline_develop_guide_en.md).
+All pipelines of PaddleX support **online experience** on [AI Studio]((https://aistudio.baidu.com/overview)) and local **fast inference**. You can quickly experience the effects of each pre-trained pipeline. If you are satisfied with the effects of the pre-trained pipeline, you can directly perform [high-performance inference](./docs/pipeline_deploy/high_performance_inference_en.md) / [serving deployment](./docs/pipeline_deploy/service_deploy_en.md) / [edge deployment](./docs/pipeline_deploy/edge_deploy_en.md) on the pipeline. If not satisfied, you can also **Custom Development** to improve the pipeline effect. For the complete pipeline development process, please refer to the [PaddleX pipeline Development Tool Local Use Tutorial](./docs/pipeline_usage/pipeline_develop_guide_en.md).
 
 In addition, PaddleX provides developers with a full-process efficient model training and deployment tool based on a [cloud-based GUI](https://aistudio.baidu.com/pipeline/mine). Developers **do not need code development**, just need to prepare a dataset that meets the pipeline requirements to **quickly start model training**. For details, please refer to the tutorial ["Developing Industrial-level AI Models with Zero Barrier"](https://aistudio.baidu.com/practical/introduce/546656605663301).
 
@@ -72,7 +72,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <th><a href="https://aistudio.baidu.com/pipeline/mine">Zero-Code Development On AI Studio</a></td> 
     </tr>
     <tr>
-        <td>OCR</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md">OCR</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/91660/webUI?source=appMineRecent">Link</a></td> 
         <td>✅</td>
         <td>✅</td>
@@ -82,7 +82,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>PP-ChatOCRv3</td>
+        <td><a href="./docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md">PP-ChatOCRv3</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter">Link</a></td> 
         <td>✅</td>
         <td>✅</td>
@@ -92,7 +92,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Table Recognition</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md">Table Recognition</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/91661?source=appMineRecent">Link</a></td>
         <td>✅</td>
         <td>✅</td>
@@ -102,7 +102,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Object Detection</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md">Object Detection</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/70230/webUI?source=appMineRecent">Link</a></td> 
         <td>✅</td>
         <td>✅</td>
@@ -112,7 +112,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Instance Segmentation</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md">Instance Segmentation</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/100063/webUI?source=appMineRecent">Link</a></td> 
         <td>✅</td>
         <td>✅</td>
@@ -122,7 +122,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Image Classification</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md">Image Classification</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/100061/webUI?source=appMineRecent">Link</a></td> 
         <td>✅</td>
         <td>✅</td>
@@ -132,7 +132,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Semantic Segmentation</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md">Semantic Segmentation</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/100062/webUI?source=appMineRecent">Link</a></td> 
         <td>✅</td>
         <td>✅</td>
@@ -142,7 +142,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Time Series Forecasting</td>
+        <td><a href="./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md">Time Series Forecasting</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/105706/webUI?source=appMineRecent">Link</a></td>
         <td>✅</td>
         <td>🚧</td>
@@ -152,7 +152,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Time Series Anomaly Detection</td>
+        <td><a href="./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md">Time Series Anomaly Detection</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/105708/webUI?source=appMineRecent">Link</a></td>
         <td>✅</td>
         <td>🚧</td>
@@ -162,7 +162,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
     <tr>
-        <td>Time Series Classification</td>
+        <td><a href="./docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md">Time Series Classification</a></td>
         <td><a href="https://aistudio.baidu.com/community/app/105707/webUI?source=appMineRecent">Link</a></td>
         <td>✅</td>
         <td>🚧</td>
@@ -172,7 +172,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>✅</td>
     </tr>
         <tr>
-        <td>Small Object Detection</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection_en.md">Small Object Detection</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
@@ -182,7 +182,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>🚧</td>
     </tr>
         <tr>
-        <td>Multi-label Image Classification</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification_en.md">Multi-label Image Classification</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
@@ -192,7 +192,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>🚧</td>
     </tr>
     <tr>
-        <td>Image Anomaly Detection</td>
+        <td><a href="./docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md">Image Anomaly Detection</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>
@@ -202,7 +202,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>🚧</td>
     </tr>
     <tr>
-        <td>Layout Parsing</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing_en.md">Layout Parsing</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>🚧</td>
@@ -212,7 +212,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>🚧</td>
     </tr>
     <tr>
-        <td>Formula Recognition</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition_en.md">Formula Recognition</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>🚧</td>
@@ -222,7 +222,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
         <td>🚧</td>
     </tr>
     <tr>
-        <td>Seal Recognition</td>
+        <td><a href="./docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition_en.md">Seal Recognition</a></td>
         <td>🚧</td>
         <td>✅</td>
         <td>✅</td>

+ 2 - 6
docs/module_usage/tutorials/cv_modules/anomaly_detection.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 |模型|ROCAUC(Avg)|模型存储大小(M)|介绍|
 |-|-|-|-|
@@ -16,7 +14,6 @@
 
 **以上模型精度指标测量自 MVTec_AD 数据集。**
 
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -113,7 +110,7 @@ python main.py -c paddlex/configs/anomaly_detection/STFPM.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`STFPM.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`STFPM.yaml`,训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -124,8 +121,7 @@ python main.py -c paddlex/configs/anomaly_detection/STFPM.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
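
The tutorial hunks above all describe the same `main.py` training invocation with `-o key=value` overrides (for example `-o Global.device=gpu:0,1` or `-o Train.epochs_iters=10`). A minimal sketch of that command pattern; `build_paddlex_cmd` is a hypothetical helper for illustration, not part of PaddleX:

```python
# Hypothetical helper mirroring the CLI pattern documented in the diff:
#   python main.py -c <config>.yaml -o Global.mode=train -o Global.dataset_dir=<path>
def build_paddlex_cmd(config, mode, dataset_dir, extra=None):
    """Assemble a main.py invocation with -o key=value overrides."""
    overrides = {"Global.mode": mode, "Global.dataset_dir": dataset_dir}
    overrides.update(extra or {})  # e.g. {"Global.device": "gpu:0,1"}
    parts = ["python", "main.py", "-c", config]
    for key, value in overrides.items():
        parts += ["-o", "{}={}".format(key, value)]
    return " ".join(parts)
```

For instance, `build_paddlex_cmd("paddlex/configs/anomaly_detection/STFPM.yaml", "train", "./dataset", {"Global.device": "gpu:0,1"})` reproduces the two-GPU training command from the tutorial.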

+ 7 - 11
docs/module_usage/tutorials/cv_modules/anomaly_detection_en.md

@@ -7,15 +7,12 @@ Unsupervised anomaly detection is a technology that automatically identifies and
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 | Model | ROCAUC(Avg)| Model Size (M) | Description |
 |-|-|-|-|
 | STFPM | 0.962 | 22.5 | An unsupervised anomaly detection algorithm based on representation consists of a pre-trained teacher network and a student network with the same structure. The student network detects anomalies by matching its own features with the corresponding features in the teacher network. |
 
-The above model accuracy indicators are measured from the MVTec_AD dataset.
-</details>
+**The above model accuracy indicators are measured from the MVTec_AD dataset.**
 
 ## III. Quick Integration  <a id="quick"> </a> 
 Before quick integration, you need to install the PaddleX wheel package. For the installation method of the wheel package, please refer to the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md). After installing the wheel package, a few lines of code can complete the inference of the unsupervised anomaly detection module. You can switch models under this module freely, and you can also integrate the model inference of the unsupervised anomaly detection module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png) to your local machine.
@@ -116,7 +113,7 @@ python main.py -c paddlex/configs/anomaly_detection/STFPM.yaml \
 ```
 The steps required are:
 
-* Specify the path to the `.yaml` configuration file of the model (here it is `STFPM.yaml`)
+* Specify the path to the `.yaml` configuration file of the model (here it is `STFPM.yaml`,When training other models, you need to specify the corresponding configuration files. The relationship between the model and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`
 
@@ -127,13 +124,12 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), the following steps are required:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
-* Specify the `.yaml` configuration file path of the model (here it is `STFPM.yaml`)
-* Set the mode to model evaluation: `-o Global.mode=evaluate`
-* Specify the path of the validation dataset: `-o Global.dataset_dir`
-Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt.pdstate`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, optimizer, EMA, static graph network parameters, static graph network structure, etc.;
 </details>
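
The output list added in this hunk names `train_result.json` as the record of whether a training task completed normally. A sketch of consuming that record downstream; the `"done"` field name is an assumption for illustration, not PaddleX's documented schema:

```python
import json
from pathlib import Path

def training_completed(output_dir):
    """Return True if train_result.json in output_dir reports a normal completion.

    Assumes a boolean "done" field; adjust to the actual schema PaddleX writes.
    """
    record = json.loads(Path(output_dir, "train_result.json").read_text())
    return bool(record.get("done", False))
```

A deployment script could call this on the training output directory (default `./output/`) before picking up the produced weights.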
 
 ### **4.3 Model Evaluation**

+ 3 - 7
docs/module_usage/tutorials/cv_modules/face_detection.md

@@ -7,15 +7,12 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 |模型|mAP(%)|GPU推理耗时(ms)|CPU推理耗时 (ms)|模型存储大小(M)|介绍|
 |-|-|-|-|-|-|
 |PicoDet_LCNet_x2_5_face|35.8|33.7|537.0|28.9|基于PicoDet_LCNet_x2_5的人脸检测模型|
 
-注:以上精度指标为wider_face数据集 mAP(0.5:0.95)。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。
-</details>
+**注:以上精度指标为wider_face数据集 mAP(0.5:0.95)。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。**
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -176,7 +173,7 @@ python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PicoDet_LCNet_x2_5_face.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PicoDet_LCNet_x2_5_face.yaml`,训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -187,8 +184,7 @@ python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 6 - 10
docs/module_usage/tutorials/cv_modules/face_detection_en.md

@@ -7,15 +7,12 @@ Face detection is a fundamental task in object detection, aiming to automaticall
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 | Model | mAP(%)| GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
 |-|-|-|-|-|-|
 | PicoDet_LCNet_x2_5_face | 35.8 | 33.7 | 537.0 | 28.9 | Face detection model based on PicoDet_LCNet_x2_5 |
 
 **Note: The evaluation set for the above accuracy metrics is wider_face dataset mAP(0.5:0.95). GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 ## III. Quick Integration  <a id="quick"> </a> 
 Before quick integration, you need to install the PaddleX wheel package. For the installation method of the wheel package, please refer to the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md). After installing the wheel package, a few lines of code can complete the inference of the face detection module. You can switch models under this module freely, and you can also integrate the model inference of the face detection module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_detection.png) to your local machine.
@@ -173,7 +170,7 @@ python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
 ```
 The steps required are:
 
-* Specify the path to the `.yaml` configuration file of the model (here it is `PicoDet_LCNet_x2_5_face.yaml`)
+* Specify the path to the `.yaml` configuration file of the model (here it is `PicoDet_LCNet_x2_5_face.yaml`. When training other models, you need to specify the corresponding configuration file; the correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`
 
@@ -184,13 +181,12 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), the following steps are required:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
-* Specify the `.yaml` configuration file path of the model (here it is `PicoDet_LCNet_x2_5_face.yaml`)
-* Set the mode to model evaluation: `-o Global.mode=evaluate`
-* Specify the path of the validation dataset: `-o Global.dataset_dir`
-Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt.pdstate`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, optimizer, EMA, static graph network parameters, static graph network structure, etc.;
 </details>
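The `train_result.json` file listed above can be consumed programmatically, e.g. to locate the weights to deploy. A sketch — the exact schema here is an assumption for illustration; only the file's stated purpose (recording weight metrics and file paths) comes from the docs:

```python
# Sketch: read a training-result record and pick the best weights.
# The schema ("done", "models", "metric", "pdparams") is hypothetical.
import json, os, tempfile

sample = {
    "done": True,
    "models": {
        "best": {"metric": 0.358, "pdparams": "output/best_model/model.pdparams"},
        "last": {"metric": 0.351, "pdparams": "output/last_model/model.pdparams"},
    },
}
path = os.path.join(tempfile.mkdtemp(), "train_result.json")
with open(path, "w") as f:
    json.dump(sample, f)

with open(path) as f:
    result = json.load(f)
best = result["models"]["best"]
print(best["pdparams"], best["metric"])
```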
 
 ### **4.3 Model Evaluation**

+ 3 - 8
docs/module_usage/tutorials/cv_modules/human_detection.md

@@ -7,9 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
-
 <table>
   <tr>
     <th >模型</th>
@@ -39,8 +36,7 @@
   </tr>
 </table>
 
-注:以上精度指标为CrowdHuman数据集 mAP(0.5:0.95)。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。
-</details>
+**注:以上精度指标为CrowdHuman数据集 mAP(0.5:0.95)。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。**
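The "mAP(0.5:0.95)" in the note above is the COCO-style metric: AP averaged over IoU thresholds 0.50, 0.55, …, 0.95. A tiny sketch of that final averaging step — the per-threshold AP values are made up for illustration:

```python
# Sketch: mAP(0.5:0.95) = mean of AP over the 10 IoU thresholds 0.50..0.95.
# AP values below are illustrative, not measured.
ap_per_iou = [0.62, 0.58, 0.53, 0.47, 0.40, 0.33, 0.25, 0.17, 0.09, 0.03]
map_50_95 = sum(ap_per_iou) / len(ap_per_iou)
print(round(map_50_95, 3))
```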
 
 
 ## 三、快速集成
@@ -204,7 +200,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-YOLOE-S_human.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-YOLOE-S_human.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -215,8 +211,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 6 - 10
docs/module_usage/tutorials/cv_modules/human_detection_en.md

@@ -7,8 +7,6 @@ Human detection is a subtask of object detection, which utilizes computer vision
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 <table>
   <tr>
@@ -40,7 +38,6 @@ Human detection is a subtask of object detection, which utilizes computer vision
 </table>
 
 **Note: The evaluation set for the above accuracy metrics is CrowdHuman dataset mAP(0.5:0.95). GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 
 ## III. Quick Integration
@@ -201,7 +198,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
 ```
 The steps required are:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-S_human.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-S_human.yaml`. When training other models, you need to specify the corresponding configuration file; the correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters for Model Tasks](../../instructions/config_parameters_common_en.md).
@@ -211,13 +208,12 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), the following steps are required:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
-* Specify the `.yaml` configuration file path of the model (here it is `PP-YOLOE-S_human.yaml`)
-* Set the mode to model evaluation: `-o Global.mode=evaluate`
-* Specify the path of the validation dataset: `-o Global.dataset_dir`
-Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt.pdstate`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, optimizer, EMA, static graph network parameters, static graph network structure, etc.;
 </details>
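The docs note that both dynamic-graph and static-graph weights are produced, and that static-graph weights are used for inference by default. A toy filter that keeps only the static-graph files from an output listing — the file names and the suffix grouping are illustrative assumptions:

```python
# Sketch: select static-graph files (params + network structure) from the
# training output, per the docs' description. File names are hypothetical.
STATIC_SUFFIXES = (".pdiparams", ".pdmodel")  # static-graph params + structure

def static_graph_files(filenames):
    return [f for f in filenames if f.endswith(STATIC_SUFFIXES)]

produced = ["best_model.pdparams", "best_model.pdema", "best_model.pdopt",
            "inference.pdiparams", "inference.pdmodel", "train.log"]
print(static_graph_files(produced))
```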
 
 ### **4.3 Model Evaluation**

+ 79 - 11
docs/module_usage/tutorials/cv_modules/image_classification.md

@@ -6,9 +6,84 @@
 图像分类模块是计算机视觉系统中的关键组成部分,负责对输入的图像进行分类。该模块的性能直接影响到整个计算机视觉系统的准确性和效率。图像分类模块通常会接收图像作为输入,然后通过深度学习或其他机器学习算法,根据图像的特性和内容,将其分类到预定义的类别中。例如,对于一个动物识别系统,图像分类模块可能需要将输入的图像分类为“猫”、“狗”、“马”等类别。图像分类模块的分类结果将作为输出,供其他模块或系统使用。
 
 ## 二、支持模型列表
+
+
+<table>
+    <tr>
+        <th>模型</th>
+        <th>Top1 Acc(%)</th>
+        <th>GPU推理耗时 (ms)</th>
+        <th>CPU推理耗时 (ms)</th>
+        <th>模型存储大小 (M)</th>
+    </tr>
+  <tr>
+    <td>CLIP_vit_base_patch16_224</td>
+    <td>85.36</td>
+    <td>13.1957</td>
+    <td>285.493</td>
+    <td >306.5 M</td>
+  </tr>
+  <tr>
+    <td>MobileNetV3_small_x1_0</td>
+    <td>68.2</td>
+    <td>6.00993</td>
+    <td>12.9598</td>
+    <td>10.5 M</td>
+  </tr>
+ <tr>
+    <td>PP-HGNet_small</td>
+    <td>81.51</td>
+    <td>5.50661</td>
+    <td>119.041</td>
+    <td>86.5 M</td>
+  </tr>
+  <tr>
+    <td>PP-HGNetV2-B0</td>
+    <td>77.77</td>
+    <td>6.53694</td>
+    <td>23.352</td>
+    <td>21.4 M</td>
+  </tr>
+<tr>
+    <td>PP-HGNetV2-B4</td>
+    <td>83.57</td>
+    <td>9.66407</td>
+    <td>54.2462</td>
+    <td>70.4 M</td>
+  </tr>
+<tr>
+    <td>PP-HGNetV2-B6</td>
+    <td>86.30</td>
+    <td>21.226</td>
+    <td>255.279</td>
+    <td>268.4 M</td>
+  </tr>
+<tr>
+    <td>PP-LCNet_x1_0</td>
+    <td>71.32</td>
+    <td>3.84845</td>
+    <td>9.23735</td>
+    <td>10.5 M</td>
+  </tr>
+ <tr>
+    <td>ResNet50</td>
+    <td>76.5</td>
+    <td>9.62383</td>
+    <td>64.8135</td>
+    <td>90.8 M</td>
+  </tr>
+<tr>
+    <td>SwinTransformer_tiny_patch4_window7_224</td>
+    <td>81.10</td>
+    <td>8.54846</td>
+    <td>156.306</td>
+    <td>100.1 M</td>
+  </tr>
+</table>
+
+> ❗ 以上列出的是图像分类模块重点支持的**9个核心模型**,该模块总共支持**80个模型**,完整的模型列表如下:
 <details>
    <summary> 👉模型列表详情</summary>
-
 <table>
   <tr>
     <th>模型</th>
@@ -424,8 +499,6 @@
     <td>32.1 M</td>
   </tr>
   <tr>
-
-  <tr>
     <td>PP-LCNetV2_base</td>
     <td>77.05</td>
     <td>5.23428</td>
@@ -448,7 +521,6 @@
     <td>14.6 M</td>
   </tr>
 <tr>
-<tr>
     <td>ResNet18_vd</td>
     <td>72.3</td>
     <td>3.53048</td>
@@ -526,7 +598,6 @@
     <td>235.185</td>
     <td>266.0 M</td>
   </tr>
-<tr>
   <tr>
     <td>StarNet-S1</td>
     <td>73.6</td>
@@ -556,7 +627,6 @@
     <td>43.2497</td>
     <td>28.9 M</td>
   </tr>
-<tr>
   <tr>
     <td>SwinTransformer_base_patch4_window7_224</td>
     <td>83.37</td>
@@ -600,10 +670,8 @@
     <td>156.306</td>
     <td>100.1 M</td>
   </tr>
-<tr>
 </table>
 
-
 **注:以上精度指标为 [ImageNet-1k](https://www.image-net.org/index.php) 验证集 Top1 Acc。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。**
 </details>
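The Top1 Acc reported in the tables above is the fraction of validation samples whose highest-scoring predicted class matches the label. A minimal pure-Python sketch with made-up scores:

```python
# Sketch: compute Top-1 accuracy from per-class scores and ground-truth labels.
def top1_accuracy(scores, labels):
    """scores: list of per-class score lists; labels: ground-truth class indices."""
    correct = sum(
        1 for s, y in zip(scores, labels)
        if max(range(len(s)), key=s.__getitem__) == y
    )
    return correct / len(labels)

scores = [[0.1, 0.7, 0.2],   # predicts class 1
          [0.8, 0.1, 0.1],   # predicts class 0
          [0.3, 0.3, 0.4]]   # predicts class 2
print(top1_accuracy(scores, [1, 0, 1]))  # 2 of 3 correct
```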
 
@@ -621,6 +689,7 @@ for res in output:
     res.save_to_img("./output/")
     res.save_to_json("./output/res.json")
 ```
+
 关于更多 PaddleX 的单模型推理的 API 的使用方法,可以参考[PaddleX单模型Python脚本使用说明](../../instructions/model_python_API.md)。
 
 ## 四、二次开发
@@ -754,7 +823,7 @@ python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -764,8 +833,7 @@ python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 79 - 2
docs/module_usage/tutorials/cv_modules/image_classification_en.md

@@ -6,6 +6,83 @@
 The image classification module is a crucial component in computer vision systems, responsible for categorizing input images. The performance of this module directly impacts the accuracy and efficiency of the entire computer vision system. Typically, the image classification module receives an image as input and, through deep learning or other machine learning algorithms, classifies it into predefined categories based on its characteristics and content. For instance, in an animal recognition system, the image classification module might need to classify an input image as "cat," "dog," "horse," etc. The classification results from the image classification module are then output for use by other modules or systems.
 
 ## II. List of Supported Models
+
+
+<table>
+    <tr>
+        <th>Model</th>
+        <th>Top1 Acc(%)</th>
+        <th>GPU Inference Time (ms)</th>
+        <th>CPU Inference Time (ms)</th>
+        <th>Model Storage Size (M)</th>
+    </tr>
+    <tr>
+        <td>CLIP_vit_base_patch16_224</td>
+        <td>85.36</td>
+        <td>13.1957</td>
+        <td>285.493</td>
+        <td>306.5 M</td>
+    </tr>
+    <tr>
+        <td>MobileNetV3_small_x1_0</td>
+        <td>68.2</td>
+        <td>6.00993</td>
+        <td>12.9598</td>
+        <td>10.5 M</td>
+    </tr>
+    <tr>
+        <td>PP-HGNet_small</td>
+        <td>81.51</td>
+        <td>5.50661</td>
+        <td>119.041</td>
+        <td>86.5 M</td>
+    </tr>
+    <tr>
+        <td>PP-HGNetV2-B0</td>
+        <td>77.77</td>
+        <td>6.53694</td>
+        <td>23.352</td>
+        <td>21.4 M</td>
+    </tr>
+    <tr>
+        <td>PP-HGNetV2-B4</td>
+        <td>83.57</td>
+        <td>9.66407</td>
+        <td>54.2462</td>
+        <td>70.4 M</td>
+    </tr>
+    <tr>
+        <td>PP-HGNetV2-B6</td>
+        <td>86.30</td>
+        <td>21.226</td>
+        <td>255.279</td>
+        <td>268.4 M</td>
+    </tr>
+    <tr>
+        <td>PP-LCNet_x1_0</td>
+        <td>71.32</td>
+        <td>3.84845</td>
+        <td>9.23735</td>
+        <td>10.5 M</td>
+    </tr>
+    <tr>
+        <td>ResNet50</td>
+        <td>76.5</td>
+        <td>9.62383</td>
+        <td>64.8135</td>
+        <td>90.8 M</td>
+    </tr>
+    <tr>
+        <td>SwinTransformer_tiny_patch4_window7_224</td>
+        <td>81.10</td>
+        <td>8.54846</td>
+        <td>156.306</td>
+        <td>100.1 M</td>
+    </tr>
+</table>
+
+> ❗ The above list features the **9 core models** that the image classification module primarily supports. In total, this module supports **80 models**. The complete list of models is as follows:
+
 <details>
    <summary> 👉Details of Model List</summary>
 
@@ -749,7 +826,7 @@ python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
 
 the following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here it is `PP-LCNet_x1_0.yaml`)
+* Specify the path of the model's `.yaml` configuration file (here it is `PP-LCNet_x1_0.yaml`. When training other models, you need to specify the corresponding configuration files. The relationship between the model and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path of the training dataset: `-o Global.dataset_dir`. Other related parameters can be set by modifying the fields under `Global` and `Train` in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first 2 GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file parameter instructions for the corresponding task module of the model [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common_en.md).
 
@@ -759,7 +836,7 @@ the following steps are required:
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;
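The Top1 Acc and latency columns in the core-model table above can guide model choice. A sketch that picks the most accurate model within a GPU-latency budget — the figures are copied from the table; the helper itself is hypothetical:

```python
# Sketch: choose the most accurate classifier under a GPU-latency budget,
# using (name, Top1 Acc %, GPU inference ms) rows from the table above.
models = [
    ("CLIP_vit_base_patch16_224", 85.36, 13.1957),
    ("MobileNetV3_small_x1_0", 68.2, 6.00993),
    ("PP-HGNet_small", 81.51, 5.50661),
    ("PP-HGNetV2-B0", 77.77, 6.53694),
    ("PP-HGNetV2-B4", 83.57, 9.66407),
    ("PP-HGNetV2-B6", 86.30, 21.226),
    ("PP-LCNet_x1_0", 71.32, 3.84845),
    ("ResNet50", 76.5, 9.62383),
    ("SwinTransformer_tiny_patch4_window7_224", 81.10, 8.54846),
]

def best_under_budget(models, gpu_ms_budget):
    candidates = [m for m in models if m[2] <= gpu_ms_budget]
    return max(candidates, key=lambda m: m[1])[0] if candidates else None

print(best_under_budget(models, 10.0))  # most accurate model under 10 ms
```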

+ 2 - 6
docs/module_usage/tutorials/cv_modules/image_feature.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 <table>
   <tr>
@@ -45,7 +43,6 @@
 
 
 **注:以上精度指标为 AliProducts recall@1。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。**
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -264,7 +261,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-ShiTuV2_rec.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-ShiTuV2_rec.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -275,8 +272,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 2 - 5
docs/module_usage/tutorials/cv_modules/image_feature_en.md

@@ -7,8 +7,6 @@ The image feature module is one of the important tasks in computer vision, prima
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 <table>
   <tr>
@@ -44,7 +42,6 @@ The image feature module is one of the important tasks in computer vision, prima
 </table>
 
 **Note: The above accuracy metrics are Recall@1 from AliProducts. All GPU inference times are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
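Recall@1, the metric quoted in the note above, checks whether each query's nearest gallery feature shares the query's label. A self-contained sketch using cosine similarity — the toy 2-D features and labels are illustrative:

```python
# Sketch: Recall@1 for feature retrieval -- for each query, is the single
# nearest gallery item (by cosine similarity) of the same class?
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall_at_1(queries, q_labels, gallery, g_labels):
    hits = 0
    for q, y in zip(queries, q_labels):
        nearest = max(range(len(gallery)), key=lambda i: cosine(q, gallery[i]))
        hits += g_labels[nearest] == y
    return hits / len(queries)

gallery  = [[1.0, 0.0], [0.0, 1.0]]
g_labels = ["cat", "dog"]
queries  = [[0.9, 0.1], [0.2, 0.8]]
print(recall_at_1(queries, ["cat", "dog"], gallery, g_labels))
```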
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -265,7 +262,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml \
 ```
 The following steps are required:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-ShiTuV2_rec.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-ShiTuV2_rec.yaml`. When training other models, you need to specify the corresponding configuration file; the correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`. 
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file instructions for the corresponding task module of the model [PaddleX Common Configuration File Parameters](../../instructions/config_parameters_common_en.md).
@@ -275,7 +272,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 30 - 3
docs/module_usage/tutorials/cv_modules/instance_segmentation.md

@@ -7,6 +7,34 @@
 
 ## 二、支持模型列表
 
+<table>
+    <tr>
+        <th>模型</th>
+        <th>Mask AP</th>
+        <th>GPU推理耗时(ms)</th>
+        <th>CPU推理耗时 (ms)</th>
+        <th>模型存储大小(M)</th>
+        <th>介绍</th>
+    </tr>
+    <tr>
+        <td>Mask-RT-DETR-H</td>
+        <td>50.6</td>
+        <td>132.693</td>
+        <td>4896.17</td>
+        <td>449.9 M</td>
+        <td rowspan="2">Mask-RT-DETR 是一种基于RT-DETR的实例分割模型,通过采用最优性能的更好的PP-HGNetV2作为骨干网络,构建了MaskHybridEncoder编码器,引入了IOU-aware Query Selection 技术,使其在相同推理耗时上取得了SOTA实例分割精度。</td>
+    </tr>
+    <tr>
+        <td>Mask-RT-DETR-L</td>
+        <td>45.7</td>
+        <td>46.5059</td>
+        <td>2575.92</td>
+        <td>113.6 M</td>
+    </tr>
+    </table>
+
+> ❗ 以上列出的是实例分割模块重点支持的**2个核心模型**,该模块总共支持**15个模型**,完整的模型列表如下:
+
 <details>
    <summary> 👉模型列表详情</summary>
 
@@ -323,7 +351,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为 `Mask-RT-DETR-L.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为 `Mask-RT-DETR-L.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -335,8 +363,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
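Mask AP, the metric in the tables above, builds on mask IoU — the intersection-over-union of predicted and ground-truth binary masks. A minimal pure-Python sketch with toy 3×3 masks:

```python
# Sketch: mask IoU between two equal-sized binary masks (2D lists of 0/1).
def mask_iou(pred, gt):
    inter = union = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            inter += p & g
            union += p | g
    return inter / union if union else 0.0

pred = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
gt   = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
print(mask_iou(pred, gt))  # 2 overlapping pixels out of 6 in the union
```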

+ 30 - 2
docs/module_usage/tutorials/cv_modules/instance_segmentation_en.md

@@ -7,6 +7,34 @@ The instance segmentation module is a crucial component in computer vision syste
 
 ## II. Supported Model List
 
+<table>
+    <tr>
+        <th>Model</th>
+        <th>Mask AP</th>
+        <th>GPU Inference Time (ms)</th>
+        <th>CPU Inference Time (ms)</th>
+        <th>Model Size (M)</th>
+        <th>Description</th>
+    </tr>
+    <tr>
+        <td>Mask-RT-DETR-H</td>
+        <td>50.6</td>
+        <td>132.693</td>
+        <td>4896.17</td>
+        <td>449.9 M</td>
+        <td rowspan="2">Mask-RT-DETR is an instance segmentation model based on RT-DETR. By adopting the high-performance PP-HGNetV2 as the backbone network and constructing a MaskHybridEncoder encoder, along with introducing IOU-aware Query Selection technology, it achieves state-of-the-art (SOTA) instance segmentation accuracy with the same inference time.</td>
+    </tr>
+    <tr>
+        <td>Mask-RT-DETR-L</td>
+        <td>45.7</td>
+        <td>46.5059</td>
+        <td>2575.92</td>
+        <td>113.6 M</td>
+    </tr>
+    </table>
+
+> ❗ The above list features the **2 core models** that the instance segmentation module primarily supports. In total, this module supports **15 models**. The complete list of models is as follows:
+
 <details>
    <summary> 👉Model List Details</summary>
 
@@ -322,7 +350,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 ```
 The following steps are required:
 
-* Specify the path to the `.yaml` configuration file of the model (here it is `Mask-RT-DETR-L.yaml`)
+* Specify the path to the `.yaml` configuration file of the model (here it is `Mask-RT-DETR-L.yaml`. When training other models, you need to specify the corresponding configuration file; the correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`.
 Other related parameters can be set by modifying the fields under `Global` and `Train` in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify the first 2 GPUs for training: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration File Parameters Instructions](../../instructions/config_parameters_common_en.md).
@@ -332,7 +360,7 @@ Other related parameters can be set by modifying the fields under `Global` and `
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 6
docs/module_usage/tutorials/cv_modules/mainbody_detection.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 <table>
   <tr>
@@ -32,7 +30,6 @@
 </table>
 
 注:以上精度指标为 PaddleClas主体检测数据集  mAP(0.5:0.95)。
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -190,7 +187,7 @@ python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-ShiTuV2_det.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-ShiTuV2_det.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -201,8 +198,7 @@ python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
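The `-o Section.key=value` override mechanism described above (e.g. `-o Global.device=gpu:0,1`, `-o Train.epochs_iters=10`) can be sketched as a small parser. This is an illustrative reimplementation only, not PaddleX's actual code; the function name `parse_overrides` is hypothetical:

```python
# Hypothetical sketch (not PaddleX internals): fold "-o Section.key=value"
# overrides into a nested config dict. Only the first "=" splits key/value,
# so values such as "gpu:0,1" survive intact.
def parse_overrides(pairs):
    config = {}
    for pair in pairs:
        key, value = pair.split("=", 1)
        section = config
        parts = key.split(".")
        for name in parts[:-1]:
            # descend/create intermediate sections like "Global" or "Train"
            section = section.setdefault(name, {})
        section[parts[-1]] = value
    return config

print(parse_overrides(["Global.device=gpu:0,1", "Train.epochs_iters=10"]))
```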

+ 6 - 10
docs/module_usage/tutorials/cv_modules/mainbody_detection_en.md

@@ -7,8 +7,6 @@ Mainbody detection is a fundamental task in object detection, aiming to identify
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Details of Model List</summary>
 
 <table>
   <tr>
@@ -32,7 +30,6 @@ Mainbody detection is a fundamental task in object detection, aiming to identify
 </table>
 
 **Note: The evaluation set for the above accuracy metrics is  PaddleClas mainbody detection dataset mAP(0.5:0.95). GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 ## III. Quick Integration  <a id="quick"> </a>
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -189,7 +186,7 @@ python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
 ```
 The steps required are:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-ShiTuV2_det.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-ShiTuV2_det.yaml`; when training other models, specify the corresponding configuration file, as listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters for Model Tasks](../../instructions/config_parameters_common_en.md).
@@ -199,13 +196,12 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), the following steps are required:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
-* Specify the `.yaml` configuration file path of the model (here it is `PP-ShiTuV2_det.yaml`)
-* Set the mode to model evaluation: `-o Global.mode=evaluate`
-* Specify the path of the validation dataset: `-o Global.dataset_dir`
-Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt`, `.pdstates`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, optimizer, EMA, static graph network parameters, static graph network structure, etc.;
 </details>
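The `.pdema` file listed above holds an exponential moving average (EMA) of the training weights, which often evaluates slightly better than the raw weights. A rough, hypothetical sketch of the idea (not PaddleX internals; `ema_update` and the dict-of-floats representation are assumptions for illustration):

```python
# Hypothetical sketch of an EMA weight update: each step blends the running
# average with the current parameters. Real frameworks do this per-tensor;
# here parameters are a dict mapping names to floats.
def ema_update(ema, params, decay=0.9):
    return {k: decay * ema[k] + (1.0 - decay) * params[k] for k in ema}

ema = {"w": 0.0}
for step_value in [1.0, 1.0, 1.0]:
    ema = ema_update(ema, {"w": step_value})
print(ema["w"])  # converges toward the recent parameter values
```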
 
 ### **4.3 Model Evaluation**

+ 2 - 6
docs/module_usage/tutorials/cv_modules/ml_classification.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 <table>
   <tr>
@@ -55,7 +53,6 @@
 
 
 **注:以上精度指标为[COCO2017](https://cocodataset.org/#home)的多标签分类任务mAP。**
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -259,7 +256,7 @@ python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yam
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_ML.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_ML.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -269,8 +266,7 @@ python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yam
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
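As a hedged illustration of how `train_result.json` might be consumed programmatically: the `done` field below is an assumption made for this sketch, not the documented PaddleX schema, so inspect your own file before relying on any field name.

```python
import json

# Hypothetical sketch: read output/train_result.json and report whether
# the training task finished. The "done" key is assumed, not documented.
def training_completed(path):
    with open(path, "r", encoding="utf-8") as f:
        result = json.load(f)
    return bool(result.get("done", False))
```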

+ 2 - 5
docs/module_usage/tutorials/cv_modules/ml_classification_en.md

@@ -7,8 +7,6 @@ The image multi-label classification module is a crucial component in computer v
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 <table>
   <tr>
@@ -54,7 +52,6 @@ The image multi-label classification module is a crucial component in computer v
 </table>
 
 **Note: The above accuracy metrics are mAP for the multi-label classification task on [COCO2017](https://cocodataset.org/#home).**
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -257,7 +254,7 @@ python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yam
 ```
 the following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here it is `PP-LCNet_x1_0_ML.yaml`)
+* Specify the path of the model's `.yaml` configuration file (here it is `PP-LCNet_x1_0_ML.yaml`; when training other models, specify the corresponding configuration file, as listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path of the training dataset: `-o Global.dataset_dir`. Other related parameters can be set by modifying the fields under `Global` and `Train` in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first 2 GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file parameter instructions for the corresponding task module of the model [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common_en.md).
 
@@ -267,7 +264,7 @@ the following steps are required:
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 63 - 3
docs/module_usage/tutorials/cv_modules/object_detection.md

@@ -6,6 +6,67 @@
 目标检测模块是计算机视觉系统中的关键组成部分,负责在图像或视频中定位和标记出包含特定目标的区域。该模块的性能直接影响到整个计算机视觉系统的准确性和效率。目标检测模块通常会输出目标区域的边界框(Bounding Boxes),这些边界框将作为输入传递给目标识别模块进行后续处理。
 
 ## 二、支持模型列表
+
+<table >
+  <tr>
+    <th>模型</th>
+    <th>mAP(%)</th>
+    <th>GPU推理耗时 (ms)</th>
+    <th>CPU推理耗时 (ms)</th>
+    <th>模型存储大小 (M)</th>
+    <th>介绍</th>
+  </tr>
+  <tr>
+    <td>PicoDet-L</td>
+    <td>42.6</td>
+    <td>16.6715</td>
+    <td>169.904</td>
+    <td>20.9 M</td>
+    <td rowspan="2">PP-PicoDet是一种全尺寸、宽视角目标的轻量级目标检测算法,充分考虑移动端设备运算量。与传统目标检测算法相比,PP-PicoDet具有更小的模型尺寸和更低的计算复杂度,并在保证检测精度的同时具有更高的速度和更低的延迟。</td>
+  </tr>
+  <tr>
+    <td>PicoDet-S</td>
+    <td>29.1</td>
+    <td>14.097</td>
+    <td>37.6563</td>
+    <td>4.4 M</td>
+
+  </tr>
+    <tr>
+    <td>PP-YOLOE_plus-L</td>
+    <td>52.9</td>
+    <td>33.5644</td>
+    <td>814.825</td>
+    <td>185.3 M</td>
+    <td rowspan="2">PP-YOLOE_plus 是百度飞桨视觉团队自研的云边一体高精度模型PP-YOLOE迭代优化升级的版本,通过使用Objects365大规模数据集、优化预处理,大幅提升了模型端到端推理速度。</td>
+  </tr>
+  <tr>
+    <td>PP-YOLOE_plus-S</td>
+    <td>43.7</td>
+    <td>16.8884</td>
+    <td>223.059</td>
+    <td>28.3 M</td>
+
+  </tr>
+  <tr>
+    <td>RT-DETR-H</td>
+    <td>56.3</td>
+    <td>114.814</td>
+    <td>3933.39</td>
+    <td>435.8 M</td>
+    <td rowspan="2">RT-DETR是第一个实时端到端目标检测器。该模型设计了一个高效的混合编码器,满足模型效果与吞吐率的双需求,高效处理多尺度特征,并提出了加速和优化的查询选择机制,以优化解码器查询的动态化。RT-DETR支持通过使用不同的解码器来实现灵活的端到端推理速度。</td>
+  </tr>
+  <tr>
+    <td>RT-DETR-L</td>
+    <td>53.0</td>
+    <td>34.5252</td>
+    <td>1454.27</td>
+    <td>113.7 M</td>
+
+  </tr>
+</table>
+
+> ❗ 以上列出的是目标检测模块重点支持的**6个核心模型**,该模块总共支持**37个模型**,完整的模型列表如下:
 <details>
    <summary> 👉模型列表详情</summary>
 
@@ -519,7 +580,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PicoDet-S.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PicoDet-S.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -530,8 +591,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
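The mAP(0.5:0.95) metric quoted in the tables above averages AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05. A minimal sketch of the underlying IoU computation for axis-aligned boxes (an illustrative helper, not PaddleX's evaluation code):

```python
# Minimal sketch: intersection-over-union for two (x1, y1, x2, y2) boxes.
# mAP(0.5:0.95) matches predictions to ground truth at IoU thresholds
# 0.5, 0.55, ..., 0.95 and averages the resulting APs.
def iou(box_a, box_b):
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```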

+ 61 - 2
docs/module_usage/tutorials/cv_modules/object_detection_en.md

@@ -6,6 +6,65 @@
 The object detection module is a crucial component in computer vision systems, responsible for locating and marking regions containing specific objects in images or videos. The performance of this module directly impacts the accuracy and efficiency of the entire computer vision system. The object detection module typically outputs bounding boxes for the target regions, which are then passed as input to the object recognition module for further processing.
 
 ## II. List of Supported Models
+
+<table>
+  <tr>
+    <th>Model</th>
+    <th>mAP(%)</th>
+    <th>GPU Inference Time (ms)</th>
+    <th>CPU Inference Time (ms)</th>
+    <th>Model Storage Size (M)</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>PicoDet-L</td>
+    <td>42.6</td>
+    <td>16.6715</td>
+    <td>169.904</td>
+    <td>20.9 M</td>
+    <td rowspan="2">PP-PicoDet is a lightweight object detection algorithm for full-size, wide-angle targets, considering the computational capacity of mobile devices. Compared to traditional object detection algorithms, PP-PicoDet has a smaller model size and lower computational complexity, achieving higher speed and lower latency while maintaining detection accuracy.</td>
+  </tr>
+  <tr>
+    <td>PicoDet-S</td>
+    <td>29.1</td>
+    <td>14.097</td>
+    <td>37.6563</td>
+    <td>4.4 M</td>
+  </tr>
+  <tr>
+    <td>PP-YOLOE_plus-L</td>
+    <td>52.9</td>
+    <td>33.5644</td>
+    <td>814.825</td>
+    <td>185.3 M</td>
+    <td rowspan="2">PP-YOLOE_plus is an upgraded version of the high-precision cloud-edge integrated model PP-YOLOE, developed by Baidu's PaddlePaddle vision team. By using the large-scale Objects365 dataset and optimizing preprocessing, it significantly enhances the model's end-to-end inference speed.</td>
+  </tr>
+  <tr>
+    <td>PP-YOLOE_plus-S</td>
+    <td>43.7</td>
+    <td>16.8884</td>
+    <td>223.059</td>
+    <td>28.3 M</td>
+  </tr>
+  <tr>
+    <td>RT-DETR-H</td>
+    <td>56.3</td>
+    <td>114.814</td>
+    <td>3933.39</td>
+    <td>435.8 M</td>
+    <td rowspan="2">RT-DETR is the first real-time end-to-end object detector. The model features an efficient hybrid encoder to meet both model performance and throughput requirements, efficiently handling multi-scale features, and proposes an accelerated and optimized query selection mechanism to optimize the dynamics of decoder queries. RT-DETR supports flexible end-to-end inference speeds by using different decoders.</td>
+  </tr>
+  <tr>
+    <td>RT-DETR-L</td>
+    <td>53.0</td>
+    <td>34.5252</td>
+    <td>1454.27</td>
+    <td>113.7 M</td>
+  </tr>
+</table>
+
+> ❗ The above list features the **6 core models** that the object detection module primarily supports. In total, this module supports **37 models**. The complete list of models is as follows:
+
 <details>
    <summary> 👉Details of Model List</summary>
 
@@ -527,7 +586,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
 ```
 The following steps are required:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PicoDet-S.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PicoDet-S.yaml`; when training other models, specify the corresponding configuration file, as listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`.
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file instructions for the corresponding task module of the model [PaddleX Common Configuration File Parameters](../../instructions/config_parameters_common_en.md).
@@ -537,7 +596,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 6
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md

@@ -7,15 +7,12 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 |模型|mA(%)|GPU推理耗时(ms)|CPU推理耗时 (ms)|模型存储大小(M)|介绍|
 |-|-|-|-|-|-|
 |PP-LCNet_x1_0_pedestrian_attribute|92.2|3.84845|9.23735|6.7 M  |PP-LCNet_x1_0_pedestrian_attribute 是一种基于PP-LCNet的轻量级行人属性识别模型,包含26个类别|
 
 **注:以上精度指标为 PaddleX 内部自建数据集 mA。GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为 8,精度类型为 FP32。**
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -203,7 +200,7 @@ python main.py -c paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_pedestrian_attribute.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_pedestrian_attribute.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -215,8 +212,7 @@ python main.py -c paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 2 - 7
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition_en.md

@@ -8,16 +8,11 @@ Pedestrian attribute recognition is a crucial component in computer vision syste
 ## II. Supported Model List
 
 
-
-<details>
-   <summary> 👉 Model List Details</summary>
-
 | Model | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
 |-|-|-|-|-|-|
 | PP-LCNet_x1_0_pedestrian_attribute | 92.2 |3.84845 | 9.23735| 6.7 M | PP-LCNet_x1_0_pedestrian_attribute is a lightweight pedestrian attribute recognition model based on PP-LCNet, covering 26 categories |
 
 **Note: The above accuracy metrics are mA on PaddleX's internal self-built dataset. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 ## <span id="lable">III. Quick Integration</span>
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -205,7 +200,7 @@ python main.py -c paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_
 ```
 the following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here it is `PP-LCNet_x1_0_pedestrian_attribute.yaml`)
+* Specify the path of the model's `.yaml` configuration file (here it is `PP-LCNet_x1_0_pedestrian_attribute.yaml`; when training other models, specify the corresponding configuration file, as listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path of the training dataset: `-o Global.dataset_dir`. Other related parameters can be set by modifying the fields under `Global` and `Train` in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first 2 GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file parameter instructions for the corresponding task module of the model [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common_en.md).
 
@@ -215,7 +210,7 @@ the following steps are required:
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 9 - 3
docs/module_usage/tutorials/cv_modules/semantic_segmentation.md

@@ -7,6 +7,13 @@
 
 ## 二、支持模型列表
 
+|模型名称|mIoU(%)|GPU推理耗时(ms)|CPU推理耗时 (ms)|模型存储大小(M)|
+|-|-|-|-|-|
+|OCRNet_HRNet-W48|82.15|78.9976|2226.95|249.8 M|
+|PP-LiteSeg-T|73.10|7.6827|138.683|28.5 M|
+
+> ❗ 以上列出的是语义分割模块重点支持的**2个核心模型**,该模块总共支持**18个模型**,完整的模型列表如下:
+
 <details>
    <summary> 👉模型列表详情</summary>
 
@@ -230,7 +237,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 
 需要如下几步:
 
-* 指定模型的.yaml 配置文件路径(此处为 `PP-LiteSeg-T.yam`)
+* 指定模型的`.yaml` 配置文件路径(此处为 `PP-LiteSeg-T.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -241,8 +248,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
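The mIoU values in the tables above average per-class intersection-over-union across classes. A minimal sketch, assuming a per-class confusion matrix with rows as ground truth and columns as predictions (an illustrative helper, not PaddleX's metric code):

```python
# Minimal sketch: mean IoU from an n x n confusion matrix.
# For class c: IoU = TP / (TP + FP + FN); classes absent from both
# prediction and ground truth (zero denominator) are skipped.
def mean_iou(confusion):
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # predicted c, wrong
        fn = sum(confusion[c]) - tp                       # true c, missed
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```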

+ 10 - 6
docs/module_usage/tutorials/cv_modules/semantic_segmentation_en.md

@@ -7,9 +7,15 @@ Semantic segmentation is a technique in computer vision that classifies each pix
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
+|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
+|-|-|-|-|-|
+|OCRNet_HRNet-W48|82.15|78.9976|2226.95|249.8 M|
+|PP-LiteSeg-T|73.10|7.6827|138.683|28.5 M|
 
+> ❗ The above list features the **2 core models** that the semantic segmentation module primarily supports. In total, this module supports **18 models**. The complete list of models is as follows:
+
+<details>
+   <summary> 👉Model List Details</summary>
 |Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |Deeplabv3_Plus-R50 |80.36|61.0531|1513.58|94.9 M|
@@ -241,7 +247,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 
 You need to follow these steps:
 
-* Specify the `.yaml` configuration file path for the model (here it's `PP-LiteSeg-T.yaml`).
+* Specify the `.yaml` configuration file path for the model (here it's `PP-LiteSeg-T.yaml`; when training other models, specify the corresponding configuration file, as listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md)).
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 
@@ -252,9 +258,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, with the default path being `output`. To specify a different save path, use the `-o Global.output` field in the configuration file.
 * PaddleX abstracts the concepts of dynamic graph weights and static graph weights from you. During model training, both dynamic and static graph weights are produced, and static graph weights are used by default for model inference.
-* When training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-
-After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, including whether the training task completed successfully, produced weight metrics, and related file paths.
 * `train.log`: Training log file, recording model metric changes, loss changes, etc.

+ 4 - 8
docs/module_usage/tutorials/cv_modules/small_object_detection.md

@@ -7,9 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
-
 
 <table>
   <tr>
@@ -49,8 +46,8 @@
   </tr>
 </table>
 
-注:以上精度指标为 VisDrone-DET 验证集 mAP(0.5:0.95)。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。
-</details>
+**注:以上精度指标为 VisDrone-DET 验证集 mAP(0.5:0.95)。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32**
+
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -242,7 +239,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yam
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-YOLOE_plus_SOD-S.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-YOLOE_plus_SOD-S.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -253,8 +250,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yam
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 7 - 11
docs/module_usage/tutorials/cv_modules/small_object_detection_en.md

@@ -7,9 +7,6 @@ Small object detection typically refers to accurately detecting and locating sma
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Details of Model List</summary>
-
 <table>
   <tr>
     <th>Model</th>
@@ -49,7 +46,7 @@ Small object detection typically refers to accurately detecting and locating sma
 </table>
 
 **Note: The evaluation set for the above accuracy metrics is VisDrone-DET dataset mAP(0.5:0.95). GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
+
 
 ## III. Quick Integration  <a id="quick"> </a> 
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md) 
@@ -240,7 +237,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yam
 ```
 The steps required are:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE_plus_SOD-S.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE_plus_SOD-S.yaml`; when training other models, specify the corresponding configuration file, as listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters for Model Tasks](../../instructions/config_parameters_common_en.md).
@@ -250,13 +247,12 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), the following steps are required:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
-* Specify the `.yaml` configuration file path of the model (here it is `PP-YOLOE_plus_SOD-S.yaml`)
-* Set the mode to model evaluation: `-o Global.mode=evaluate`
-* Specify the path of the validation dataset: `-o Global.dataset_dir`
-Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt`, `.pdstates`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, EMA weights, optimizer states, training states, static graph network parameters, static graph network structure, etc.;
 </details>
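A quick way to sanity-check a finished run is to probe the output directory for the artifacts listed above (a sketch; `./output` is the default location, and the exact file set can vary by model):

```shell
# Sketch: check which of the expected training artifacts are present.
# OUTPUT_DIR defaults to ./output, the default save path mentioned above.
OUTPUT_DIR=${OUTPUT_DIR:-./output}
status=""
for f in train_result.json train.log config.yaml; do
  if [ -e "$OUTPUT_DIR/$f" ]; then
    status="$status found:$f"
  else
    status="$status missing:$f"
  fi
done
echo "$status"
```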
 
 ### **4.3 Model Evaluation**

+ 2 - 6
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md

@@ -7,8 +7,6 @@
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 |Model|mA (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|Description|
 |-|-|-|-|-|-|
@@ -16,7 +14,6 @@
 
 **Note: The above accuracy metrics are mA on the VeRi dataset. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For details, refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.md)
@@ -185,7 +182,7 @@ python main.py -c paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attrib
 ```
 The steps required are:
 
-* Specify the model's `.yaml` configuration file path (here `PP-LCNet_x1_0_vehicle_attribute.yaml`)
+* Specify the model's `.yaml` configuration file path (here `PP-LCNet_x1_0_vehicle_attribute.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters on the command line. For example, to train on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common.md).
@@ -196,8 +193,7 @@ python main.py -c paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attrib
 
 * During model training, PaddleX automatically saves the model weight files, defaulting to `output`. To specify a save path, set the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During training, both dynamic and static graph weights are produced, and static graph weights are selected by default for inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md).
-After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
+* After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 3 - 6
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition_en.md

@@ -7,8 +7,6 @@ Vehicle attribute recognition is a crucial component in computer vision systems.
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 | Model | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
 |-|-|-|-|-|-|
@@ -16,7 +14,6 @@ Vehicle attribute recognition is a crucial component in computer vision systems.
 
 **Note: The above accuracy metrics are mA on the VeRi dataset. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
-</details>
 
 ## <span id="lable">III. Quick Integration</span>
 
@@ -185,7 +182,7 @@ python main.py -c paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attrib
 ```
 The steps required are:
 
-* Specify the path to the model's `.yaml` configuration file (here it's `PP-LCNet_x1_0_vehicle_attribute.yaml`)
+* Specify the path to the model's `.yaml` configuration file (here it's `PP-LCNet_x1_0_vehicle_attribute.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters](../../instructions/config_parameters_common_en.md).
@@ -196,7 +193,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;
@@ -244,7 +241,7 @@ python main.py -c paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attrib
 ```
 Similar to model training and evaluation, the following steps are required:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-LCNet_x1_0_vehicle_attribute.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-LCNet_x1_0_vehicle_attribute.yaml`; when using other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Set the mode to model inference prediction: `-o Global.mode=predict`
 * Specify the model weights path: `-o Predict.model_dir="./output/best_model/inference"`
 * Specify the input data path: `-o Predict.input="..."`

+ 3 - 7
docs/module_usage/tutorials/cv_modules/vehicle_detection.md

@@ -7,8 +7,6 @@
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 <table>
   <tr>
@@ -36,8 +34,7 @@
   </tr>
 </table>
 
-Note: The above accuracy metrics are mAP(0.5:0.95) on the PPVehicle validation set. GPU inference time for all models is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.
-</details>
+**Note: The above accuracy metrics are mAP(0.5:0.95) on the PPVehicle validation set. GPU inference time for all models is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For details, refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.md)
@@ -195,7 +192,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 ```
 The steps required are:
 
-* Specify the model's `.yaml` configuration file path (here `PP-YOLOE-S_vehicle.yaml`)
+* Specify the model's `.yaml` configuration file path (here `PP-YOLOE-S_vehicle.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters on the command line. For example, to train on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common.md).
@@ -206,8 +203,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 
 * During model training, PaddleX automatically saves the model weight files, defaulting to `output`. To specify a save path, set the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During training, both dynamic and static graph weights are produced, and static graph weights are selected by default for inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md).
-After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
+* After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 6 - 10
docs/module_usage/tutorials/cv_modules/vehicle_detection_en.md

@@ -7,8 +7,6 @@ Vehicle detection is a subtask of object detection, specifically referring to th
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 <table>
   <tr>
@@ -34,7 +32,6 @@ Vehicle detection is a subtask of object detection, specifically referring to th
     <td>775.6</td>
     <td>196.02</td>
   </tr>
 </table>
 
 **Note: The evaluation set for the above accuracy metrics is PPVehicle dataset mAP(0.5:0.95). GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
@@ -194,7 +191,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 ```
 The steps required are:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-S_vehicle.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-S_vehicle.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters for Model Tasks](../../instructions/config_parameters_common_en.md).
@@ -204,13 +201,12 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), the following steps are required:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
-* Specify the `.yaml` configuration file path of the model (here it is `PP-YOLOE-S_vehicle.yaml`)
-* Set the mode to model evaluation: `-o Global.mode=evaluate`
-* Specify the path of the validation dataset: `-o Global.dataset_dir`
-Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt`, `.pdstates`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, EMA weights, optimizer states, training states, static graph network parameters, static graph network structure, etc.;
 </details>
 
 ### **4.3 Model Evaluation**

+ 2 - 6
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md

@@ -7,15 +7,12 @@
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 |Model|Top-1 Acc (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|Description|
 |-|-|-|-|-|-|
 |PP-LCNet_x1_0_doc_ori|99.06|3.84845|9.23735|7|A document image classification model based on PP-LCNet_x1_0, with four categories: 0°, 90°, 180°, 270°|
 
 **Note: The above accuracy metrics are evaluated on a self-built dataset covering multiple scenarios such as IDs and documents, containing 1,000 images. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 ## III. Quick Integration
 
@@ -181,7 +178,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 ```
 The steps required are:
 
-* Specify the model's `.yaml` configuration file path (here `PP-LCNet_x1_0_doc_ori.yaml`)
+* Specify the model's `.yaml` configuration file path (here `PP-LCNet_x1_0_doc_ori.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters on the command line. For example, to train on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common.md).
@@ -191,8 +188,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 
 * During model training, PaddleX automatically saves the model weight files, defaulting to `output`. To specify a save path, set the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During training, both dynamic and static graph weights are produced, and static graph weights are selected by default for inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md).
-After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
+* After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 7
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md

@@ -7,15 +7,12 @@ The document image orientation classification module is aim to distinguish the o
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 | Model | Top-1 Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
 |-|-|-|-|-|-|
 | PP-LCNet_x1_0_doc_ori | 99.06 | 3.84845|9.23735 | 7 | A document image classification model based on PP-LCNet_x1_0, with four categories: 0°, 90°, 180°, 270° |
 
 **Note: The above accuracy metrics are evaluated on a self-built dataset covering various scenarios such as IDs and documents, containing 1000 images. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 ## III. Quick Integration
 
@@ -187,7 +184,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 
 You need to follow these steps:
 
-* Specify the path to the model's `.yaml` configuration file (here, `PP-LCNet_x1_0_doc_ori.yaml`).
+* Specify the path to the model's `.yaml` configuration file (here, `PP-LCNet_x1_0_doc_ori.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md)).
 * Set the mode to model training: `-o Global.mode=train`.
 * Specify the training dataset path: `-o Global.dataset_dir`.
 
@@ -198,9 +195,7 @@ Other relevant parameters can be set by modifying fields under `Global` and `Tra
 
 * During model training, PaddleX automatically saves the model weight files, defaulting to `output`. If you want to specify a different save path, you can set it using the `-o Global.output` field in the configuration file.
 * PaddleX abstracts away the concept of dynamic graph weights and static graph weights. During model training, it produces both dynamic and static graph weights. For model inference, it defaults to using static graph weights.
-* To train other models, specify the corresponding configuration file. The relationship between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
+* After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including the following:
 
 * `train_result.json`: Training result record file, which records whether the training task was completed normally, as well as the output weight metrics and related file paths.
 * `train.log`: Training log file, which records changes in model metrics and loss during training.

+ 2 - 6
docs/module_usage/tutorials/ocr_modules/formula_recognition.md

@@ -7,8 +7,6 @@
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 <table>
   <tr>
@@ -31,7 +29,6 @@
 </table>
 
 **Note: The above accuracy metrics are measured on the LaTeX-OCR formula recognition test set.**
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For details, refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.md)
@@ -225,7 +222,7 @@ python main.py -c paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml  \
 ```
 The steps required are:
 
-* Specify the model's `.yaml` configuration file path (here `LaTeX_OCR_rec.yaml`)
+* Specify the model's `.yaml` configuration file path (here `LaTeX_OCR_rec.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters on the command line. For example, to train on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common.md).
@@ -235,8 +232,7 @@ python main.py -c paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml  \
 
 * During model training, PaddleX automatically saves the model weight files, defaulting to `output`. To specify a save path, set the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During training, both dynamic and static graph weights are produced, and static graph weights are selected by default for inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md).
-After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
+* After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 5
docs/module_usage/tutorials/ocr_modules/formula_recognition_en.md

@@ -7,8 +7,6 @@ The formula recognition module is a crucial component of OCR (Optical Character
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 <table>
   <tr>
@@ -31,7 +29,6 @@ The formula recognition module is a crucial component of OCR (Optical Character
 </table>
 
 **Note: The above accuracy metrics are measured on the LaTeX-OCR formula recognition test set.**
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md).
@@ -223,7 +220,7 @@ python main.py -c paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml  \
 ```
 The following steps are required:
 
-* Specify the `.yaml` configuration file path for the model (here it is `LaTeX_OCR_rec.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `LaTeX_OCR_rec.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`. 
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file instructions for the corresponding task module of the model [PaddleX Common Configuration File Parameters](../../instructions/config_parameters_common_en.md).
@@ -233,7 +230,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 6
docs/module_usage/tutorials/ocr_modules/layout_detection.md

@@ -7,8 +7,6 @@
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 |Model|mAP(0.5) (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|Description|
 |-|-|-|-|-|-|
@@ -18,7 +16,6 @@
 |RT-DETR-H_layout_17cls|92.6|115.1|3827.2|470.2|A high-precision layout region localization model based on RT-DETR-H, trained on a self-built dataset of Chinese and English papers, magazines, and research reports, covering 17 common layout categories: paragraph title, image, text, number, abstract, content, chart title, formula, table, table title, reference, document title, footnote, header, algorithm, footer, and seal|
 
 **Note: The above accuracy metrics are evaluated on PaddleOCR's self-built layout region analysis dataset, containing 10,000 document-type images of common types such as Chinese and English papers, magazines, and research reports. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For details, refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.md)
@@ -178,7 +175,7 @@ python main.py -c paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
 ```
 The steps required are:
 
-* Specify the model's `.yaml` configuration file path (here `PicoDet-L_layout_3cls.yaml`)
+* Specify the model's `.yaml` configuration file path (here `PicoDet-L_layout_3cls.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters on the command line. For example, to train on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common.md).
@@ -190,8 +187,7 @@ python main.py -c paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
 
 * During model training, PaddleX automatically saves the model weight files, defaulting to `output`. To specify a save path, set the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During training, both dynamic and static graph weights are produced, and static graph weights are selected by default for inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md).
-After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
+* After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 6 - 10
docs/module_usage/tutorials/ocr_modules/layout_detection_en.md

@@ -7,8 +7,6 @@ The core task of structure analysis is to parse and segment the content of input
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉Model List Details</summary>
 
 | Model | mAP(0.5) (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
 |-|-|-|-|-|-|
@@ -18,7 +16,6 @@ The core task of structure analysis is to parse and segment the content of input
 | RT-DETR-H_layout_17cls | 92.6 | 115.1 | 3827.2 | 470.2 | A high-precision layout area localization model trained on a self-constructed dataset based on RT-DETR-H for scenarios such as Chinese and English papers, magazines, and research reports includes 17 common layout categories, namely: paragraph titles, images, text, numbers, abstracts, content, chart titles, formulas, tables, table titles, references, document titles, footnotes, headers, algorithms, footers, and seals. |
 
 **Note: The evaluation set for the above accuracy metrics is PaddleOCR's self-built layout region analysis dataset, containing 10,000 images of common document types, including English and Chinese papers, magazines, research reports, etc. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
-</details>
 
 ## III. Quick Integration  <a id="quick"> </a>
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to [PaddleX Local Installation Tutorial](../../../installation/installation_en.md)
@@ -179,7 +176,7 @@ python main.py -c paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
 ```
 The steps required are:
 
-* Specify the path to the `.yaml` configuration file of the model (here it is `PicoDet-L_layout_3cls.yaml`)
+* Specify the path to the `.yaml` configuration file of the model (here it is `PicoDet-L_layout_3cls.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`
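The same config-plus-overrides pattern carries over to the other modes; switching from training to evaluation only changes `Global.mode` and the dataset path (a sketch with a placeholder dataset path):

```shell
# Sketch: evaluation reuses the training config with a different mode.
# DATASET is a placeholder -- point it at your own validation dataset.
CONFIG=paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
DATASET=./dataset/your_layout_dataset
EVAL_CMD="python main.py -c $CONFIG -o Global.mode=evaluate -o Global.dataset_dir=$DATASET"
echo "$EVAL_CMD"
```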
 
@@ -190,13 +187,12 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), the following steps are required:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
-* Specify the `.yaml` configuration file path of the model (here it is `PicoDet-L_layout_3cls.yaml`)
-* Set the mode to model evaluation: `-o Global.mode=evaluate`
-* Specify the path of the validation dataset: `-o Global.dataset_dir`
-Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt`, `.pdstates`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, EMA weights, optimizer states, training states, static graph network parameters, static graph network structure, etc.;
 </details>
 
 ### **4.3 Model Evaluation**

+ 2 - 6
docs/module_usage/tutorials/ocr_modules/seal_text_detection.md

@@ -7,8 +7,6 @@
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 |Model|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|Description|
 |-|-|-|-|-|-|
@@ -17,7 +15,6 @@
 
 **Note: The above accuracy metrics are evaluated on a self-built dataset containing 500 circular seal images. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For details, refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.md)
@@ -180,7 +177,7 @@ python main.py -c paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.y
 ```
 The steps required are:
 
-* Specify the model's `.yaml` configuration file path (here `PP-OCRv4_server_seal_det.yaml`)
+* Specify the model's `.yaml` configuration file path (here `PP-OCRv4_server_seal_det.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters on the command line. For example, to train on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common.md).
@@ -191,8 +188,7 @@ python main.py -c paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.y
 
 * During model training, PaddleX automatically saves the model weight files, defaulting to `output`. To specify a save path, set the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During training, both dynamic and static graph weights are produced, and static graph weights are selected by default for inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files is listed in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md).
-After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
+* After completing model training, all outputs are saved in the specified output directory (default `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 7
docs/module_usage/tutorials/ocr_modules/seal_text_detection_en.md

@@ -7,8 +7,6 @@ The seal text detection module typically outputs multi-point bounding boxes arou
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 |Model Name| Hmean(%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)| Description |
 |-|-|-|-|-|-|
@@ -18,7 +16,6 @@ The seal text detection module typically outputs multi-point bounding boxes arou
 
 **Note: The evaluation set for the above accuracy metrics is a self-built dataset containing 500 circular seal images. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -193,7 +190,7 @@ python main.py -c paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.y
 
 You need to follow these steps:
 
-* Specify the `.yaml` configuration file path for the model (here it's `PP-OCRv4_server_seal_det.yaml`).
+* Specify the `.yaml` configuration file path for the model (here it's `PP-OCRv4_server_seal_det.yaml`). When training other models, you need to specify the corresponding configuration file; the mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 
@@ -205,9 +202,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, with the default path being `output`. To specify a different save path, use the `-o Global.output` field in the configuration file.
 * PaddleX abstracts the concepts of dynamic graph weights and static graph weights from you. During model training, both dynamic and static graph weights are produced, and static graph weights are used by default for model inference.
-* When training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-
-After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, including whether the training task completed successfully, produced weight metrics, and related file paths.
 * `train.log`: Training log file, recording model metric changes, loss changes, etc.

+ 2 - 6
docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 <table>
   <tr>
     <th>模型</th>
@@ -40,7 +38,6 @@
 
 **注:以上精度指标测量自PaddleX 内部自建英文表格识别数据集。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。**
 
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -198,7 +195,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`SLANet.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`SLANet.yaml`)。训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅 [PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -208,8 +205,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 2 - 5
docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md

@@ -7,8 +7,6 @@ Table structure recognition is a crucial component in table recognition systems,
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 <table>
   <tr>
@@ -42,7 +40,6 @@ SLANet_plus is an enhanced version of SLANet, a table structure recognition mode
 
 **Note: The above accuracy metrics are evaluated on a self-built English table recognition dataset by PaddleX. All GPU inference times are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -201,7 +198,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
 ```
 the following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here it is `SLANet.yaml`)
+* Specify the path of the model's `.yaml` configuration file (here it is `SLANet.yaml`). When training other models, you need to specify the corresponding configuration file; the mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path of the training dataset: `-o Global.dataset_dir`. Other related parameters can be set by modifying the fields under `Global` and `Train` in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first 2 GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file parameter instructions for the corresponding task module of the model [PaddleX Common Model Configuration File Parameters](../../instructions/config_parameters_common_en.md).
 
@@ -211,7 +208,7 @@ the following steps are required:
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 6
docs/module_usage/tutorials/ocr_modules/text_detection.md

@@ -7,15 +7,12 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 |模型|检测Hmean(%)|GPU推理耗时(ms)|CPU推理耗时 (ms)|模型存储大小(M)|介绍|
 |-|-|-|-|-|-|
 |PP-OCRv4_server_det|82.69|83.3501|2434.01|109|PP-OCRv4 的服务端文本检测模型,精度更高,适合在性能较好的服务器上部署|
 |PP-OCRv4_mobile_det|77.79|10.6923|120.177|4.7|PP-OCRv4 的移动端文本检测模型,效率更高,适合在端侧设备部署|
 
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)。
@@ -165,7 +162,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-OCRv4_mobile_det.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-OCRv4_mobile_det.yaml`)。训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅 [PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明 [PaddleX通用模型配置文件参数说明](../../../module_usage/instructions/config_parameters_common.md)。
@@ -175,8 +172,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅 [PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 36 - 4
docs/module_usage/tutorials/ocr_modules/text_detection_en.md

@@ -159,7 +159,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 ```
 The steps required are:
 
-* Specify the path to the model's `.yaml` configuration file (here it's `PP-OCRv4_mobile_det.yaml`)
+* Specify the path to the model's `.yaml` configuration file (here it's `PP-OCRv4_mobile_det.yaml`). When training other models, you need to specify the corresponding configuration file; the mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters Documentation](../../../module_usage/instructions/config_parameters_common_en.md).
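The `-o Section.key=value` overrides described above follow a simple dotted-key convention. A rough sketch of how such strings could be merged into a nested config dict (illustrative only, not PaddleX's actual implementation):

```python
# Sketch of merging `-o Section.key=value` overrides into a config dict;
# this mirrors the CLI convention but is not PaddleX's real code.
def apply_overrides(config, overrides):
    for item in overrides:
        key, _, value = item.partition("=")
        section, _, field = key.partition(".")
        config.setdefault(section, {})[field] = value
    return config

cfg = apply_overrides({}, ["Global.device=gpu:0,1", "Train.epochs_iters=10"])
print(cfg["Global"]["device"], cfg["Train"]["epochs_iters"])
```

Command-line overrides of this shape take precedence over the values already present in the `.yaml` file, which is why the same config can drive many experiments.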
@@ -167,9 +167,41 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 <details>
   <summary>👉 <b>More Information (Click to Expand)</b></summary>
 
-* During model training, PaddleX automatically saves model weight files, with the default path being `output`. To specify a different save path, use the `-o Global.output` field in the configuration file.
-* PaddleX abstracts away the concepts of dynamic graph weights and static graph weights from you. During model training, both dynamic and static graph weights are produced, and static graph weights are used by default for model inference.
-* When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md)
+* During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
+* PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+
+* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
+* `train.log`: Training log file, recording changes in model metrics and loss during training;
+* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
+* `.pdparams`, `.pdema`, `.pdopt`, `.pdstates`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, EMA weights, optimizer state, static graph network parameters, static graph network structure, etc.;
+</details>
+
+### **4.3 Model Evaluation**
+
+After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
+
+```bash
+python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+    -o Global.mode=evaluate \
+    -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
+```
+
+Similar to model training, the following steps are required:
+
+* Specify the path to the model's `.yaml` configuration file (in this case, `PP-OCRv4_mobile_det.yaml`)
+* Specify the mode as model evaluation: `-o Global.mode=evaluate`
+* Specify the path to the validation dataset: `-o Global.dataset_dir`
+
+Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, please refer to [PaddleX General Model Configuration File Parameter Instructions](../../../module_usage/instructions/config_parameters_common_en.md).
+
+<details>
+  <summary>👉 <b>More Instructions (Click to Expand)</b></summary>
+
+During model evaluation, you need to specify the path to the model weight file. Each configuration file has a built-in default weight save path. If you need to change it, you can set it by adding a command line argument, such as `-o Evaluate.weight_path=./output/best_accuracy/best_accuracy.pdparams`.
+
+After completing the model evaluation, an `evaluate_result.json` will be generated, which records the evaluation results. Specifically, it records whether the evaluation task was completed successfully and the model's evaluation metrics, including `precision`, `recall`, and `hmean`.
+
 </details>
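The `hmean` metric recorded in `evaluate_result.json` is the harmonic mean of precision and recall. A quick sanity-check sketch with made-up values:

```python
# Hmean = harmonic mean of precision and recall, the headline metric the
# evaluation step reports; the numbers here are illustrative only.
precision, recall = 0.84, 0.80
hmean = 2 * precision * recall / (precision + recall)
print(round(hmean, 4))
```

Because it is a harmonic mean, `hmean` is pulled toward the lower of the two values, so a detector cannot score well by maximizing only precision or only recall.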
 
 ### **4.4 Model Inference and Model Integration**

+ 0 - 3
docs/module_usage/tutorials/ocr_modules/text_image_unwarping.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 
 |模型|MS-SSIM (%)|模型存储大小(M)|介绍|
@@ -18,7 +16,6 @@
 
 **模型的精度指标测量自 [DocUNet benchmark](https://www3.cs.stonybrook.edu/~cvl/docunet.html)。**
 
-</details>
 
 ## 三、快速集成
 在快速集成前,首先需要安装PaddleX的wheel包,wheel的安装方式请参考 [PaddleX本地安装教程](../../../installation/installation.md)。完成wheel包的安装后,几行代码即可完成文本检测模块的推理,可以任意切换该模块下的模型,您也可以将文本检测的模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/doc_test.jpg)到本地。

+ 32 - 3
docs/module_usage/tutorials/ocr_modules/text_recognition.md

@@ -7,6 +7,36 @@
 
 ## 二、支持模型列表
 
+<table >
+    <tr>
+        <th>模型</th>
+        <th>识别 Avg Accuracy(%)</th>
+        <th>GPU推理耗时(ms)</th>
+        <th>CPU推理耗时 (ms)</th>
+        <th>模型存储大小(M)</th>
+        <th>介绍</th>
+    </tr>
+    <tr>
+        <td>PP-OCRv4_mobile_rec</td>
+        <td>78.20</td>
+        <td>7.95018</td>
+        <td>46.7868</td>
+        <td>10.6 M</td>
+        <td rowspan="2">PP-OCRv4是百度飞桨视觉团队自研的文本识别模型PP-OCRv3的下一个版本,通过引入数据增强方案、GTC-NRTR指导分支等策略,在模型推理速度不变的情况下,进一步提升了文本识别精度。该模型提供了服务端(server)和移动端(mobile)两个不同版本,来满足不同场景下的工业需求。</td>
+    </tr>
+    <tr>
+        <td>PP-OCRv4_server_rec </td>
+        <td>79.20</td>
+        <td>7.19439</td>
+        <td>140.179</td>
+        <td>71.2 M</td>
+    </tr>
+</table>
+
+**注:以上精度指标的评估集是 PaddleOCR 自建的中文数据集,覆盖街景、网图、文档、手写多个场景,其中文本识别包含 1.1w 张图片。所有模型 GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。**
+
+> ❗ 以上列出的是文本识别模块重点支持的**2个核心模型**,该模块总共支持**4个模型**,完整的模型列表如下:
+
 <details>
    <summary> 👉模型列表详情</summary>
 
@@ -226,7 +256,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`PP-OCRv4_mobile_rec.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`PP-OCRv4_mobile_rec.yaml`)。训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅 [PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -239,8 +269,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 32 - 2
docs/module_usage/tutorials/ocr_modules/text_recognition_en.md

@@ -7,6 +7,36 @@ The text recognition module is the core component of an OCR (Optical Character R
 
 ## II. Supported Model List
 
+<table>
+  <tr>
+    <th>Model</th>
+    <th>Recognition Avg Accuracy(%)</th>
+    <th>GPU Inference Time (ms)</th>
+    <th>CPU Inference Time (ms)</th>
+    <th>Model Size (M)</th>
+    <th>Description</th>
+  </tr>
+   <tr>
+        <td>PP-OCRv4_mobile_rec</td>
+        <td>78.20</td>
+        <td>7.95018</td>
+        <td>46.7868</td>
+        <td>10.6 M</td>
+        <td rowspan="2">PP-OCRv4, developed by Baidu's PaddlePaddle Vision Team, is the next version of the PP-OCRv3 text recognition model. By introducing data augmentation schemes, GTC-NRTR guidance branches, and other strategies, it further improves text recognition accuracy without compromising model inference speed. The model offers both server and mobile versions to meet industrial needs in different scenarios.</td>
+    </tr>
+    <tr>
+        <td>PP-OCRv4_server_rec </td>
+        <td>79.20</td>
+        <td>7.19439</td>
+        <td>140.179</td>
+        <td>71.2 M</td>
+    </tr>
+</table>
+
+**Note: The evaluation set for the above accuracy metrics is PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, handwriting, and more, with 11,000 images for text recognition. GPU inference time for all models is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
+
+> ❗ The above list features the **2 core models** that the text recognition module primarily supports. In total, this module supports **4 models**. The complete list of models is as follows:
+
 <details>
    <summary> 👉Model List Details</summary>
 
@@ -228,7 +258,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 ```
 The steps required are:
 
-* Specify the path to the model's `.yaml` configuration file (here it's `PP-OCRv4_mobile_rec.yaml`)
+* Specify the path to the model's `.yaml` configuration file (here it's `PP-OCRv4_mobile_rec.yaml`). When training other models, you need to specify the corresponding configuration file; the mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the path to the training dataset: `-o Global.dataset_dir`.
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file or adjusted by appending parameters in the command line. For example, to specify training on the first 2 GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration File Parameters](../../instructions/config_parameters_common_en.md).
@@ -240,7 +270,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 6
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 |模型名称|precison|recall|f1_score|模型存储大小(M)|介绍|
 |-|-|-|-|-|-|
@@ -20,7 +18,6 @@
 
 **注:以上精度指标测量自 PSM 数据集,时序长度为100。**
 
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -224,7 +221,7 @@ python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`AutoEncoder_ad.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`AutoEncoder_ad.yaml`)。训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅 [PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
@@ -235,8 +232,7 @@ python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 2 - 7
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md

@@ -7,8 +7,6 @@ Time series anomaly detection focuses on identifying abnormal points or periods
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 | Model Name | Precision | Recall | F1-Score | Model Size (M) | Description |
 |-|-|-|-|-|-|
@@ -20,7 +18,6 @@ Time series anomaly detection focuses on identifying abnormal points or periods
 
 **Note: The above accuracy metrics are measured on the PSM dataset with a time series length of 100.**
 
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For details, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -226,7 +223,7 @@ python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
 
 You need to follow these steps:
 
-* Specify the `.yaml` configuration file path for the model (here it's `AutoEncoder_ad.yaml`).
+* Specify the `.yaml` configuration file path for the model (here it's `AutoEncoder_ad.yaml`). When training other models, you need to specify the corresponding configuration file; the mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 
@@ -237,9 +234,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, with the default path being `output`. To specify a different save path, use the `-o Global.output` field in the configuration file.
 * PaddleX abstracts the concepts of dynamic graph weights and static graph weights from you. During model training, both dynamic and static graph weights are produced, and static graph weights are used by default for model inference.
-* When training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-
-After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, including whether the training task completed successfully, produced weight metrics, and related file paths.
 * `train.log`: Training log file, recording model metric changes, loss changes, etc.

+ 2 - 6
docs/module_usage/tutorials/time_series_modules/time_series_classification.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 |模型名称|acc(%)|模型存储大小(M)|介绍|
 |-|-|-|-|
@@ -16,7 +14,6 @@
 
 **注:以上精度指标的评估集是 UWaveGestureLibrary。**
 
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -231,7 +228,7 @@ python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`TimesNet_cls.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`TimesNet_cls.yaml`)。训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅 [PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
@@ -242,8 +239,7 @@ python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;

+ 2 - 7
docs/module_usage/tutorials/time_series_modules/time_series_classification_en.md

@@ -7,8 +7,6 @@ Time series classification involves identifying and categorizing different patte
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 |Model Name|Acc(%)|Model Size (M)|Description|
 |-|-|-|-|
@@ -16,7 +14,6 @@ Time series classification involves identifying and categorizing different patte
 
 **Note: The evaluation set for the above accuracy metrics is UWaveGestureLibrary.**
 
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -240,7 +237,7 @@ python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
 
 You need to follow these steps:
 
-* Specify the `.yaml` configuration file path for the model (here it's `TimesNet_cls.yaml`).
+* Specify the `.yaml` configuration file path for the model (here it's `TimesNet_cls.yaml`). When training other models, you need to specify the corresponding configuration file; the mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 
@@ -251,9 +248,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, with the default path being `output`. To specify a different save path, use the `-o Global.output` field in the configuration file.
 * PaddleX abstracts the concepts of dynamic graph weights and static graph weights from you. During model training, both dynamic and static graph weights are produced, and static graph weights are used by default for model inference.
-* When training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-
-After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, including whether the training task completed successfully, produced weight metrics, and related file paths.
 * `train.log`: Training log file, recording model metric changes, loss changes, etc.

+ 2 - 6
docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md

@@ -7,8 +7,6 @@
 
 ## 二、支持模型列表
 
-<details>
-   <summary> 👉模型列表详情</summary>
 
 |模型名称|mse|mae|模型存储大小(M)|介绍|
 |-|-|-|-|-|
@@ -20,7 +18,6 @@
 
 **注:以上精度指标测量自 [ETTH1](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/Etth1.tar) 测试数据集,输入序列长度为96,预测序列长度除 TiDE 外为96,TiDE 为720。**
 
-</details>
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)
@@ -256,7 +253,7 @@ python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`DLinear.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为`DLinear.yaml`,训练其他模型时,需要指定相应的配置文件,模型和配置文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md))
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
@@ -268,8 +265,7 @@ python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
 
 * 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过配置文件中 `-o Global.output` 字段进行设置。
 * PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
-* 训练其他模型时,需要的指定相应的配置文件,模型和配置的文件的对应关系,可以查阅[PaddleX模型列表(CPU/GPU)](../../../support_list/models_list.md)。
-在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+* 在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
 
 * `train_result.json`:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
 * `train.log`:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
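The override mechanism described in the steps above (e.g. `-o Global.device=gpu:0,1`, `-o Train.epochs_iters=10`) can be sketched as a small helper that composes the documented command line; the dataset path below is a placeholder:

```python
def build_train_cmd(config_path, dataset_dir, overrides=None):
    """Compose the PaddleX training invocation shown in the docs.

    Each entry in `overrides` becomes one `-o Section.key=value` flag,
    mirroring e.g. `-o Global.device=gpu:0,1` and `-o Train.epochs_iters=10`.
    """
    cmd = ["python", "main.py", "-c", config_path,
           "-o", "Global.mode=train",
           "-o", f"Global.dataset_dir={dataset_dir}"]
    for key, value in (overrides or {}).items():
        cmd += ["-o", f"{key}={value}"]
    return cmd

# Example: train DLinear on the first two GPUs for 10 epochs.
print(" ".join(build_train_cmd(
    "paddlex/configs/ts_forecast/DLinear.yaml",
    "./dataset/ts_data",  # placeholder path, substitute your dataset
    {"Global.device": "gpu:0,1", "Train.epochs_iters": 10},
)))
```

Any field under `Global` or `Train` in the `.yaml` file can be overridden this way, as the surrounding text notes.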

+ 2 - 7
docs/module_usage/tutorials/time_series_modules/time_series_forecasting_en.md

@@ -7,8 +7,6 @@ Time series forecasting aims to predict the possible values or states at a futur
 
 ## II. Supported Model List
 
-<details>
-   <summary> 👉 Model List Details</summary>
 
 |Model Name| mse | mae |Model Size (M)| Introduce |
 |-|-|-|-|-|
@@ -22,7 +20,6 @@ Time series forecasting aims to predict the possible values or states at a futur
 **Note: The above accuracy metrics are measured on the [ETTH1](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/Etth1.tar) test dataset, with an input sequence length of 96, and a prediction sequence length of 96 for all models except TiDE, which has a prediction sequence length of 720.**
 
 
-</details>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
@@ -270,7 +267,7 @@ python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
 
 You need to follow these steps:
 
-* Specify the `.yaml` configuration file path for the model (here it's `DLinear.yaml`).
+* Specify the `.yaml` configuration file path for the model (here it's `DLinear.yaml`; when training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md)).
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 
@@ -281,9 +278,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 
 * During model training, PaddleX automatically saves model weight files, with the default path being `output`. To specify a different save path, use the `-o Global.output` field in the configuration file.
 * PaddleX abstracts the concepts of dynamic graph weights and static graph weights from you. During model training, both dynamic and static graph weights are produced, and static graph weights are used by default for model inference.
-* When training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-
-After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, including whether the training task completed successfully, produced weight metrics, and related file paths.
 * `train.log`: Training log file, recording model metric changes, loss changes, etc.

+ 1 - 1
docs/pipeline_deploy/edge_deploy_en.md

@@ -1,4 +1,4 @@
-[简体中文](lite_deploy.md) | English
+[简体中文](edge_deploy.md) | English
 
 # PaddleX Edge Deployment Demo Usage Guide
 

+ 2 - 2
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing_en.md

@@ -318,7 +318,7 @@ To directly apply the pipeline in your Python project, refer to the example code
 
 Additionally, PaddleX offers three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Inference**: In production environments, many applications require stringent performance metrics, especially response speed, to ensure efficient operation and smooth user experience. PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For detailed instructions on high-performance inference, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In production environments, many applications require stringent performance metrics, especially response speed, to ensure efficient operation and smooth user experience. PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For detailed instructions on high-performance inference, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference_en.md).
 
 ☁️ **Service Deployment**: Service deployment is a common form in production environments, where inference capabilities are encapsulated as services accessible via network requests. PaddleX enables cost-effective service deployment of pipelines. For detailed instructions on service deployment, refer to the [PaddleX Service Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -440,7 +440,7 @@ for res in result["layoutParsingResults"]:
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment refers to placing computational and data processing capabilities directly on user devices, enabling them to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment refers to placing computational and data processing capabilities directly on user devices, enabling them to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy_en.md).
 
 You can choose an appropriate method to deploy your model pipeline based on your needs, and proceed with subsequent AI application integration.
 

+ 2 - 2
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md

@@ -127,7 +127,7 @@ In the above Python script, the following steps are executed:
 |---------------|-----------------------------------------------------------------------------------------------------------|
 | Python Var    | Supports directly passing in Python variables, such as numpy.ndarray representing image data. |
 | str         | Supports passing in the path of the file to be predicted, such as the local path of an image file: `/root/data/img.jpg`. |
-| str           | Supports passing in the URL of the file to be predicted, such as the network URL of an image file: [Example](ttps://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_fc.csv). |
+| str           | Supports passing in the URL of the file to be predicted, such as the network URL of an image file: [Example](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_fc.csv). |
 | str           | Supports passing in a local directory, which should contain files to be predicted, such as the local path: `/root/data/`. |
 | dict          | Supports passing in a dictionary type, where the key needs to correspond to a specific task, such as "img" for image classification tasks. The value of the dictionary supports the above types of data, for example: `{"img": "/root/data1"}`. |
 | list          | Supports passing in a list, where the list elements need to be of the above types of data, such as `[numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"], [{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]`. |
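The accepted input forms in the table above can be distinguished mechanically. A minimal sketch — the category names here are ours for illustration, not part of the PaddleX API:

```python
def classify_predict_input(x):
    """Map a candidate `predict` input to the categories in the table above."""
    if isinstance(x, dict):
        return "dict"            # e.g. {"img": "/root/data1"}
    if isinstance(x, list):
        return "list"            # elements are any of the other forms
    if isinstance(x, str):
        if x.startswith(("http://", "https://")):
            return "url"         # e.g. the demo CSV URL above
        return "path"            # local file or directory path
    return "python_var"          # e.g. a numpy.ndarray

print(classify_predict_input(
    "https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_fc.csv"))
```

Note that a plain string is ambiguous between a file path and a directory; the pipeline resolves that at prediction time, so the sketch only separates URLs from local paths.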
@@ -577,7 +577,7 @@ Choose the appropriate deployment method for your model pipeline based on your n
 If the default model weights provided by the General Time Series Forecasting Pipeline do not meet your requirements in terms of accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the pipeline in your scenario.
 
 #### 4.1 Model Fine-tuning
-Since the General Time Series Forecasting Pipeline includes a time series forecasting module, if the performance of the pipeline does not meet expectations, you need to refer to the [Customization](../../../module_usage/tutorials/time_series_modules/time_series_forecast_en.md#iv-custom-development) section in the [Time Series Forecasting Module Development Tutorial](../../../module_usage/tutorials/time_series_modules/time_series_forecast_en.md) and use your private dataset to fine-tune the time series forecasting model.
+Since the General Time Series Forecasting Pipeline includes a time series forecasting module, if the performance of the pipeline does not meet expectations, you need to refer to the [Customization](../../../module_usage/tutorials/time_series_modules/time_series_forecast_en.md#iv-custom-development) section in the [Time Series Forecasting Module Development Tutorial](../../../module_usage/tutorials/time_series_modules/time_series_forecasting_en.md) and use your private dataset to fine-tune the time series forecasting model.
 
 #### 4.2 Model Application
 After fine-tuning with your private dataset, you will obtain local model weight files.

+ 2 - 2
docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial_en.md

@@ -77,7 +77,7 @@ Through the online experience of the document scene information extraction, a Ba
 
 ### 2.2 Online Experience
 
-You can experience the effectiveness of the Document Scene Information Extraction v3 pipeline on the **AIStudio Community**. Click the link to download the [Test Paper Document File](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg), and then upload it to the [official Document Scene Information Extraction v3 application]((https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter)) to experience the extraction results. The process is as follows:
+You can experience the effectiveness of the Document Scene Information Extraction v3 pipeline on the **AIStudio Community**. Click the link to download the [Test Paper Document File](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg), and then upload it to the [official Document Scene Information Extraction v3 application](https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter) to experience the extraction results. The process is as follows:
 
 ![](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/06.png)
 
@@ -318,7 +318,7 @@ By following the above steps, prediction results can be generated under the ./ou
 
 ## 6. Pipeline Inference
 
-Replace the model in the production line with the fine-tuned model for testing, and use the academic paper literature [test file]((https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg)) to perform predictions.
+Replace the model in the production line with the fine-tuned model for testing, and use the academic paper literature [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg) to perform predictions.
 
 
 First, obtain and update the configuration file for the Document Information Extraction v3. Execute the following command to retrieve the configuration file (assuming a custom save location of `./my_path`):

+ 2 - 2
docs/practical_tutorials/ts_anomaly_detection_en.md

@@ -39,7 +39,7 @@ PaddleX provides five end-to-end time series anomaly detection models. For detai
 
 To demonstrate the entire process of time series anomaly detection, we will use the publicly available MSL (Mars Science Laboratory) dataset for model training and validation. The MSL dataset, sourced from NASA, comprises 55 dimensions and includes telemetry anomaly data reported by the spacecraft's monitoring system for unexpected event anomalies (ISA). With its practical application background, it better reflects real-world anomaly scenarios and is commonly used to test and validate the performance of time series anomaly detection models. This tutorial will perform anomaly detection based on this dataset.
 
-We have converted the dataset into a standard data format, and you can obtain a sample dataset using the following command. For an introduction to the data format, please refer to the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_anomaly_detection_en.md).
+We have converted the dataset into a standard data format, and you can obtain a sample dataset using the following command. For an introduction to the data format, please refer to the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md).
 
 
 You can use the following commands to download the demo dataset to a specified folder:
@@ -102,7 +102,7 @@ The above verification results have omitted some data parts. `check_pass` being
 **Note**: Only data that passes the verification can be used for training and evaluation.
 
 ### 4.3 Dataset Format Conversion/Dataset Splitting (Optional)
-If you need to convert the dataset format or re-split the dataset, refer to Section 4.1.3 in the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_anomaly_detection_en.md).
+If you need to convert the dataset format or re-split the dataset, refer to Section 4.1.3 in the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md).
 
 ## 5. Model Training and Evaluation
 ### 5.1 Model Training

+ 2 - 2
docs/practical_tutorials/ts_classification_en.md

@@ -36,7 +36,7 @@ PaddleX provides a time series classification model. Refer to the [Model List](.
 ### 4.1 Data Preparation
 To demonstrate the entire time series classification process, we will use the public [Heartbeat Dataset](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ts_classify_examples.tar) for model training and validation. The Heartbeat Dataset is part of the UEA Time Series Classification Archive, addressing the practical task of heartbeat monitoring for medical diagnosis. The dataset comprises multiple time series groups, with each data point consisting of a label variable, group ID, and 61 feature variables. This dataset is commonly used to test and validate the performance of time series classification prediction models.
 
-We have converted the dataset into a standard format, which can be obtained using the following commands. For data format details, refer to the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_classification_en.md).
+We have converted the dataset into a standard format, which can be obtained using the following commands. For data format details, refer to the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_classification_en.md).
 
 Dataset Acquisition Command:
 
@@ -97,7 +97,7 @@ The above verification results have omitted some data parts. `check_pass` being
 **Note**: Only data that passes the verification can be used for training and evaluation.
 
 ### 4.3 Dataset Format Conversion / Dataset Splitting (Optional)
-If you need to convert the dataset format or re-split the dataset, please refer to Section 4.1.3 in the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_classification_en.md).
+If you need to convert the dataset format or re-split the dataset, please refer to Section 4.1.3 in the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_classification_en.md).
 
 ## 5. Model Training and Evaluation
 

+ 2 - 2
docs/practical_tutorials/ts_forecast_en.md

@@ -42,7 +42,7 @@ Based on your actual usage scenario, select an appropriate model for training. A
 ### 4.1 Data Preparation
 To demonstrate the entire time series forecasting process, we will use the [Electricity](https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014) dataset for model training and validation. This dataset collects electricity consumption at a certain node from 2012 to 2014, with data collected every hour. Each data point consists of the current timestamp and corresponding electricity consumption. This dataset is commonly used to test and validate the performance of time series forecasting models.
 
-In this tutorial, we will use this dataset to predict the electricity consumption for the next 96 hours. We have already converted this dataset into a standard data format, and you can obtain a sample dataset by running the following command. For an introduction to the data format, you can refer to the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_forecast_en.md).
+In this tutorial, we will use this dataset to predict the electricity consumption for the next 96 hours. We have already converted this dataset into a standard data format, and you can obtain a sample dataset by running the following command. For an introduction to the data format, you can refer to the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_forecasting_en.md).
 
 
 You can use the following commands to download the demo dataset to a specified folder:
@@ -190,7 +190,7 @@ The above verification results have omitted some data parts. `check_pass` being
 **Note**: Only data that passes the verification can be used for training and evaluation.
 
 ### 4.3 Dataset Format Conversion/Dataset Splitting (Optional)
-If you need to convert the dataset format or re-split the dataset, you can modify the configuration file or append hyperparameters for settings. Refer to Section 4.1.3 in the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_forecast_en.md).
+If you need to convert the dataset format or re-split the dataset, you can modify the configuration file or append hyperparameters for settings. Refer to Section 4.1.3 in the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_forecasting_en.md).
 
 ## 5. Model Training and Evaluation
 

+ 1 - 1
docs/support_list/models_list_en.md

@@ -343,7 +343,7 @@ PaddleX incorporates multiple pipelines, each containing several modules, and ea
 
 **Note: The evaluation set for the above accuracy metrics is the PaddleX self-built Layout Detection Dataset, containing 10,000 images.**
 
-## [Time Series Forecasting Module](../module_usage/tutorials/time_series_modules/time_series_forecast_en.md)
+## [Time Series Forecasting Module](../module_usage/tutorials/time_series_modules/time_series_forecasting_en.md)
 |Model Name|mse|mae|Model Size|YAML File|
 |-|-|-|-|-|
 |DLinear|0.382|0.394|72 K|[DLinear.yaml](../../paddlex/configs/ts_forecast/DLinear.yaml)|