File overview

replace train config path for all docs (#2937)

cuicheng01, 10 months ago
parent
commit
8922f898dc
55 files changed, with 361 additions and 361 deletions
  1. +4 -4 docs/module_usage/tutorials/cv_modules/anomaly_detection.en.md
  2. +4 -4 docs/module_usage/tutorials/cv_modules/anomaly_detection.md
  3. +6 -6 docs/module_usage/tutorials/cv_modules/face_detection.en.md
  4. +6 -6 docs/module_usage/tutorials/cv_modules/face_detection.md
  5. +4 -4 docs/module_usage/tutorials/cv_modules/face_feature.en.md
  6. +4 -4 docs/module_usage/tutorials/cv_modules/face_feature.md
  7. +6 -6 docs/module_usage/tutorials/cv_modules/human_detection.en.md
  8. +6 -6 docs/module_usage/tutorials/cv_modules/human_detection.md
  9. +6 -6 docs/module_usage/tutorials/cv_modules/human_keypoint_detection.md
  10. +6 -6 docs/module_usage/tutorials/cv_modules/image_classification.en.md
  11. +6 -6 docs/module_usage/tutorials/cv_modules/image_classification.md
  12. +8 -8 docs/module_usage/tutorials/cv_modules/image_feature.en.md
  13. +8 -8 docs/module_usage/tutorials/cv_modules/image_feature.md
  14. +8 -8 docs/module_usage/tutorials/cv_modules/image_multilabel_classification.en.md
  15. +8 -8 docs/module_usage/tutorials/cv_modules/image_multilabel_classification.md
  16. +8 -8 docs/module_usage/tutorials/cv_modules/instance_segmentation.en.md
  17. +8 -8 docs/module_usage/tutorials/cv_modules/instance_segmentation.md
  18. +6 -6 docs/module_usage/tutorials/cv_modules/mainbody_detection.en.md
  19. +6 -6 docs/module_usage/tutorials/cv_modules/mainbody_detection.md
  20. +8 -8 docs/module_usage/tutorials/cv_modules/object_detection.en.md
  21. +8 -8 docs/module_usage/tutorials/cv_modules/object_detection.md
  22. +6 -6 docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.en.md
  23. +6 -6 docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md
  24. +6 -6 docs/module_usage/tutorials/cv_modules/rotated_object_detection.en.md
  25. +6 -6 docs/module_usage/tutorials/cv_modules/rotated_object_detection.md
  26. +9 -9 docs/module_usage/tutorials/cv_modules/semantic_segmentation.en.md
  27. +9 -9 docs/module_usage/tutorials/cv_modules/semantic_segmentation.md
  28. +8 -8 docs/module_usage/tutorials/cv_modules/small_object_detection.en.md
  29. +8 -8 docs/module_usage/tutorials/cv_modules/small_object_detection.md
  30. +6 -6 docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.en.md
  31. +6 -6 docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md
  32. +6 -6 docs/module_usage/tutorials/cv_modules/vehicle_detection.en.md
  33. +6 -6 docs/module_usage/tutorials/cv_modules/vehicle_detection.md
  34. +6 -6 docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.en.md
  35. +6 -6 docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md
  36. +6 -6 docs/module_usage/tutorials/ocr_modules/layout_detection.en.md
  37. +6 -6 docs/module_usage/tutorials/ocr_modules/layout_detection.md
  38. +6 -6 docs/module_usage/tutorials/ocr_modules/seal_text_detection.en.md
  39. +6 -6 docs/module_usage/tutorials/ocr_modules/seal_text_detection.md
  40. +6 -6 docs/module_usage/tutorials/ocr_modules/table_structure_recognition.en.md
  41. +6 -6 docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md
  42. +6 -6 docs/module_usage/tutorials/ocr_modules/text_detection.en.md
  43. +6 -6 docs/module_usage/tutorials/ocr_modules/text_detection.md
  44. +6 -6 docs/module_usage/tutorials/ocr_modules/text_recognition.en.md
  45. +6 -6 docs/module_usage/tutorials/ocr_modules/text_recognition.md
  46. +6 -6 docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.en.md
  47. +6 -6 docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.md
  48. +8 -8 docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.en.md
  49. +9 -9 docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md
  50. +8 -8 docs/module_usage/tutorials/time_series_modules/time_series_classification.en.md
  51. +8 -8 docs/module_usage/tutorials/time_series_modules/time_series_classification.md
  52. +9 -9 docs/module_usage/tutorials/time_series_modules/time_series_forecasting.en.md
  53. +9 -9 docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md
  54. +6 -6 docs/module_usage/tutorials/video_modules/video_classification.en.md
  55. +4 -4 docs/module_usage/tutorials/video_modules/video_detection.en.md
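
The commit is a purely mechanical path migration: every `paddlex/configs/<module>/...` reference in the docs becomes `paddlex/configs/modules/<module>/...`. A bulk rewrite of this kind can be sketched with a shell one-liner (an illustration only, assuming GNU `sed`; the snippet operates on a hypothetical `demo_docs/sample.md` rather than the real repository tree):

```shell
# Create a small sample doc containing an old-style config path.
mkdir -p demo_docs
cat > demo_docs/sample.md <<'EOF'
python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
    -o Global.mode=train
EOF

# Insert the new "modules/" segment after "paddlex/configs/",
# editing the file in place (GNU sed -i).
sed -i 's|paddlex/configs/|paddlex/configs/modules/|g' demo_docs/sample.md

cat demo_docs/sample.md
```

Applied over all 55 files (e.g. via `grep -rl 'paddlex/configs/' docs/ | xargs sed -i ...`), this reproduces the symmetric +361/-361 line counts seen above, since each old path line is replaced by exactly one new one.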

+ 4 - 4
docs/module_usage/tutorials/cv_modules/anomaly_detection.en.md

@@ -68,7 +68,7 @@ tar -xf ./dataset/mvtec_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mvtec_examples
 ```
@@ -118,7 +118,7 @@ After executing the above command, PaddleX will validate the dataset and collect
 A single command is sufficient to complete model training, taking the training of STFPM as an example:
 
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/mvtec_examples
 ```
@@ -150,7 +150,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/mvtec_examples
 ```
@@ -172,7 +172,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="uad_grid.png"

+ 4 - 4
docs/module_usage/tutorials/cv_modules/anomaly_detection.md

@@ -237,7 +237,7 @@ tar -xf ./dataset/mvtec_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mvtec_examples
 ```
@@ -284,7 +284,7 @@ python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
 A single command is sufficient to complete model training, taking the training of STFPM as an example:
 
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/mvtec_examples
 ```
@@ -315,7 +315,7 @@ python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/mvtec_examples
 ```
@@ -337,7 +337,7 @@ python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
 #### 4.4.1 Model Inference
 * To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/image_anomaly_detection/STFPM.yaml \
+python main.py -c paddlex/configs/modules/image_anomaly_detection/STFPM.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="uad_grid.png"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/face_detection.en.md

@@ -97,7 +97,7 @@ tar -xf ./dataset/widerface_coco_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 ```
@@ -171,13 +171,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerface_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -190,7 +190,7 @@ CheckDataset:
 A single command is sufficient to complete model training, taking the training of PicoDet_LCNet_x2_5_face as an example:
 
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 ```
@@ -222,7 +222,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 ```
@@ -244,7 +244,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_detection.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="face_detection.png"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/face_detection.md

@@ -295,7 +295,7 @@ tar -xf ./dataset/widerface_coco_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 ```
@@ -367,13 +367,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerface_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -385,7 +385,7 @@ CheckDataset:
 A single command is sufficient to complete model training, taking the training of PicoDet_LCNet_x2_5_face as an example:
 
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 ```
@@ -416,7 +416,7 @@ python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/widerface_coco_examples
 ```
@@ -438,7 +438,7 @@ python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
 #### 4.4.1 Model Inference
 * To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_detection.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
+python main.py -c paddlex/configs/modules/face_detection/PicoDet_LCNet_x2_5_face.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="face_detection.png"

+ 4 - 4
docs/module_usage/tutorials/cv_modules/face_feature.en.md

@@ -84,7 +84,7 @@ tar -xf ./dataset/face_rec_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
+python main.py -c paddlex/configs/modules/face_feature/MobileFaceNet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/face_rec_examples
 ```
@@ -186,7 +186,7 @@ images/Miyako_Miyazaki_0002.jpg images/Munir_Akram_0002.jpg 0
 Model training can be completed with a single command. Here is an example of training MobileFaceNet:
 
 ```bash
-python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
+python main.py -c paddlex/configs/modules/face_feature/MobileFaceNet.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/face_rec_examples
 ```
@@ -215,7 +215,7 @@ After completing model training, all outputs are saved in the specified output d
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 
-<pre><code class="language-bash">python main.py -c paddlex/configs/face_detection/MobileFaceNet.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/face_detection/MobileFaceNet.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/face_rec_examples
 </code></pre>
@@ -240,7 +240,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * To perform inference predictions through the command line, you only need the following command. Before running the following code, please download the [example image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_recognition_001.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
+python main.py -c paddlex/configs/modules/face_feature/MobileFaceNet.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="face_recognition_001.jpg"

+ 4 - 4
docs/module_usage/tutorials/cv_modules/face_feature.md

@@ -241,7 +241,7 @@ tar -xf ./dataset/face_rec_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
+python main.py -c paddlex/configs/modules/face_feature/MobileFaceNet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/face_rec_examples
 ```
@@ -342,7 +342,7 @@ images/Miyako_Miyazaki_0002.jpg images/Munir_Akram_0002.jpg 0
 Model training can be completed with a single command. Here is an example of training MobileFaceNet:
 
 ```bash
-python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
+python main.py -c paddlex/configs/modules/face_feature/MobileFaceNet.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/face_rec_examples
 ```
@@ -374,7 +374,7 @@ python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
+python main.py -c paddlex/configs/modules/face_feature/MobileFaceNet.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/face_rec_examples
 ```
@@ -396,7 +396,7 @@ python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
 #### 4.4.1 Model Inference
 * To perform inference predictions through the command line, you only need the following command. Before running the following code, please download the [example image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_recognition_001.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/face_feature/MobileFaceNet.yaml \
+python main.py -c paddlex/configs/modules/face_feature/MobileFaceNet.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="face_recognition_001.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/human_detection.en.md

@@ -85,7 +85,7 @@ tar -xf ./dataset/widerperson_coco_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 ```
@@ -157,13 +157,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -175,7 +175,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of `PP-YOLOE-S_human` as an example:
 
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 ```
@@ -206,7 +206,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 ```
@@ -228,7 +228,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/human_detection.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="human_detection.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/human_detection.md

@@ -267,7 +267,7 @@ tar -xf ./dataset/widerperson_coco_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 ```
@@ -339,13 +339,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -359,7 +359,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of `PP-YOLOE-S_human` as an example:
 
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 ```
@@ -390,7 +390,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/widerperson_coco_examples
 ```
@@ -412,7 +412,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
 #### 4.4.1 Model Inference
 * To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/human_detection.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
+python main.py -c paddlex/configs/modules/human_detection/PP-YOLOE-S_human.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="human_detection.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/human_keypoint_detection.md

@@ -261,7 +261,7 @@ tar -xf ./dataset/keypoint_coco_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
+python main.py -c paddlex/configs/modules/keypoint_detection/PP-TinyPose_128x96.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/keypoint_coco_examples
 ```
@@ -353,7 +353,7 @@ CheckDataset:
 Then execute the command:
 
 ```bash
-python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
+python main.py -c paddlex/configs/modules/keypoint_detection/PP-TinyPose_128x96.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/keypoint_coco_examples
 ```
@@ -362,7 +362,7 @@ python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
 The above parameters can also be set by appending command-line arguments:
 
 ```bash
-python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml  \
+python main.py -c paddlex/configs/modules/keypoint_detection/PP-TinyPose_128x96.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/keypoint_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -377,7 +377,7 @@ python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml  \
 Model training can be completed with a single command, taking the training of `PP-TinyPose_128x96` as an example:
 
 ```bash
-python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
+python main.py -c paddlex/configs/modules/keypoint_detection/PP-TinyPose_128x96.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/keypoint_coco_examples
 ```
@@ -406,7 +406,7 @@ python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
+python main.py -c paddlex/configs/modules/keypoint_detection/PP-TinyPose_128x96.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/keypoint_coco_examples
 ```
@@ -433,7 +433,7 @@ python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
 #### 4.4.1 Model Inference
 * To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/keypoint_detection_002.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
+python main.py -c paddlex/configs/modules/keypoint_detection/PP-TinyPose_128x96.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="keypoint_detection_002.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/image_classification.en.md

@@ -714,7 +714,7 @@ tar -xf ./dataset/cls_flowers_examples.tar -C ./dataset/
 One command is all you need to complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 ```
@@ -784,13 +784,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>These parameters also support being set through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/cls_flowers_examples \
     -o CheckDataset.split.enable=True \
@@ -801,7 +801,7 @@ CheckDataset:
 ### 4.2 Model Training
 A single command can complete the model training. Taking the training of the image classification model PP-LCNet_x1_0 as an example:
 ```
-python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
+python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml  \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 ```
@@ -832,7 +832,7 @@ the following steps are required:
 ## <b>4.3 Model Evaluation</b>
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model accuracy. Using PaddleX for model evaluation, a single command can complete the model evaluation:
 ```bash
-python main.py -c  paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
+python main.py -c  paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml  \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 ```
@@ -854,7 +854,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_image_classification_001.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/image_classification.md

@@ -899,7 +899,7 @@ tar -xf ./dataset/cls_flowers_examples.tar -C ./dataset/
 One command is all you need to complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 ```
@@ -969,13 +969,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>These parameters also support being set through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/cls_flowers_examples \
     -o CheckDataset.split.enable=True \
@@ -987,7 +987,7 @@ CheckDataset:
 A single command can complete the model training. Taking the training of the image classification model PP-LCNet_x1_0 as an example:
 
 ```
-python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
+python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml  \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 ```
@@ -1018,7 +1018,7 @@ python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c  paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
+python main.py -c  paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml  \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/cls_flowers_examples
 ```
@@ -1042,7 +1042,7 @@ python main.py -c  paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
+python main.py -c paddlex/configs/modules/image_classification/PP-LCNet_x1_0.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_image_classification_001.jpg"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/image_feature.en.md

@@ -76,7 +76,7 @@ tar -xf ./dataset/Inshop_examples.tar -C ./dataset/
 #### 4.1.2 Data Validation
 A single command can complete data validation:
 ```bash
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/Inshop_examples
 ```
@@ -174,13 +174,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/image_classification_labelme_examples
 </code></pre>
 <p>After the data conversion is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/image_classification_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -206,13 +206,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/Inshop_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/Inshop_examples \
     -o CheckDataset.split.enable=True \
@@ -228,7 +228,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of the image feature model PP-ShiTuV2_rec as an example:
 
 ```bash
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/Inshop_examples
 ```
@@ -259,7 +259,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/Inshop_examples
 ```
@@ -283,7 +283,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_recognition_001.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml  \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_image_recognition_001.jpg"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/image_feature.md

@@ -228,7 +228,7 @@ tar -xf ./dataset/Inshop_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/Inshop_examples
 ```
@@ -328,13 +328,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c  paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml  \
+<pre><code class="language-bash">python main.py -c  paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/image_classification_labelme_examples
 </code></pre>
 <p>After the data conversion is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/image_classification_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -360,13 +360,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/Inshop_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/Inshop_examples \
     -o CheckDataset.split.enable=True \
@@ -382,7 +382,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of the image feature model PP-ShiTuV2_rec as an example:
 
 ```
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/Inshop_examples
 ```
@@ -413,7 +413,7 @@ python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/Inshop_examples
 ```
@@ -436,7 +436,7 @@ python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml \
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_recognition_001.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/image_feature/PP-ShiTuV2_rec.yaml  \
+python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_image_recognition_001.jpg"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/image_multilabel_classification.en.md

@@ -86,7 +86,7 @@ tar -xf ./dataset/mlcls_nus_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mlcls_nus_examples
 ```
@@ -175,13 +175,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 </code></pre>
 <p>After the data conversion is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.convert.enable=True \
@@ -207,13 +207,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>These parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -224,7 +224,7 @@ CheckDataset:
 ### 4.2 Model Training
 A single command can complete the model training. Taking the training of the image multi-label classification model PP-LCNet_x1_0_ML as an example:
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/mlcls_nus_examples
 ```
@@ -257,7 +257,7 @@ the following steps are required:
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/mlcls_nus_examples
 ```
@@ -280,7 +280,7 @@ After completing model training and evaluation, you can use the trained model we
 * Inference predictions can be performed through the command line with just one command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/multilabel_classification_005.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml  \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="multilabel_classification_005.png"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/image_multilabel_classification.md

@@ -257,7 +257,7 @@ tar -xf ./dataset/mlcls_nus_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mlcls_nus_examples
 ```
@@ -345,13 +345,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 </code></pre>
 <p>After the data conversion is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.convert.enable=True \
@@ -377,13 +377,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -395,7 +395,7 @@ CheckDataset:
 A single command can complete the model training. Taking the training of the image multi-label classification model PP-LCNet_x1_0_ML as an example:
 
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/mlcls_nus_examples
 ```
@@ -426,7 +426,7 @@ python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/mlcls_nus_examples
 ```
@@ -449,7 +449,7 @@ python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_
 
 * Inference predictions can be performed through the command line with just one command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/multilabel_classification_005.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml  \
+python main.py -c paddlex/configs/modules/image_multilabel_classification/PP-LCNet_x1_0_ML.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="multilabel_classification_005.png"
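
After a bulk rename like this one, it is easy to miss a path or leave a stale reference behind. A small sanity check can be sketched as follows, assuming it runs from the PaddleX repo root; the regex and directory are assumptions for illustration, not part of the PR:

```shell
# List every modules/ config path referenced in the tutorial docs and
# flag any that does not exist on disk; no output means all references
# resolve to real files.
grep -rhoE 'paddlex/configs/modules/[A-Za-z0-9_./-]+\.yaml' \
    docs/module_usage/tutorials \
  | sort -u \
  | while read -r cfg; do
      [ -f "$cfg" ] || echo "missing: $cfg"
    done
```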

+ 8 - 8
docs/module_usage/tutorials/cv_modules/instance_segmentation.en.md

@@ -198,7 +198,7 @@ tar -xf ./dataset/instance_seg_coco_examples.tar -C ./dataset/
 Data verification can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_coco_examples
 ```
@@ -269,13 +269,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml\
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml\
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 </code></pre>
 <p>After the data conversion is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml\
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml\
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -300,13 +300,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 </code></pre>
 <p>After data splitting, the original annotation files will be renamed as <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples \
     -o CheckDataset.split.enable=True \
@@ -318,7 +318,7 @@ CheckDataset:
 A single command can complete model training. Taking the training of the instance segmentation model Mask-RT-DETR-L as an example:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/instance_seg_coco_examples
 ```
@@ -350,7 +350,7 @@ After completing model training, you can evaluate the specified model weights fi
 
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/instance_seg_coco_examples
 ```
@@ -373,7 +373,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference prediction via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_instance_segmentation_004.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_instance_segmentation_004.png"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/instance_segmentation.md

@@ -389,7 +389,7 @@ tar -xf ./dataset/instance_seg_coco_examples.tar -C ./dataset/
 Data verification can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_coco_examples
 ```
@@ -460,13 +460,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml\
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml\
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 </code></pre>
 <p>After the data conversion is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml\
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml\
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -491,13 +491,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 </code></pre>
 <p>After data splitting, the original annotation files will be renamed as <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/instance_seg_labelme_examples \
     -o CheckDataset.split.enable=True \
@@ -509,7 +509,7 @@ CheckDataset:
 A single command can complete model training. Taking the training of the instance segmentation model Mask-RT-DETR-L as an example:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/instance_seg_coco_examples
 ```
@@ -540,7 +540,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/instance_seg_coco_examples
 ```
@@ -563,7 +563,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 To perform inference prediction via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_instance_segmentation_004.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_instance_segmentation_004.png"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/mainbody_detection.en.md

@@ -73,7 +73,7 @@ tar -xf ./dataset/mainbody_det_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 ```
@@ -145,13 +145,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mainbody_det_examples \
     -o CheckDataset.split.enable=True \
@@ -163,7 +163,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of `PP-ShiTuV2_det` as an example:
 
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 ```
@@ -194,7 +194,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 ```
@@ -216,7 +216,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * To perform inference predictions through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_object_detection_002.png"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/mainbody_detection.md

@@ -258,7 +258,7 @@ tar -xf ./dataset/mainbody_det_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 ```
@@ -330,13 +330,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/mainbody_det_examples \
     -o CheckDataset.split.enable=True \
@@ -348,7 +348,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of `PP-ShiTuV2_det` as an example:
 
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 ```
@@ -379,7 +379,7 @@ python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/mainbody_det_examples
 ```
@@ -401,7 +401,7 @@ python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
 #### 4.4.1 Model Inference
 * To perform inference predictions through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
+python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_object_detection_002.png"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/object_detection.en.md

@@ -391,7 +391,7 @@ tar -xf ./dataset/det_coco_examples.tar -C ./dataset/
 Validate your dataset with a single command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 ```
@@ -480,12 +480,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples
 </code></pre>
 <p>Of course, the above parameters also support being set by appending command line arguments. Taking a <code>LabelMe</code> format dataset as an example:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -511,13 +511,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 </code></pre>
 <p>After dataset splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
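Conceptually, the split step is a seeded shuffle followed by a percentage cut, with the pre-split annotation files kept as `xxx.bak` so the operation can be redone. A toy sketch of the cut, assuming nothing about PaddleX's internals:

```python
import random

def split_samples(samples, train_percent, seed=0):
    """Seeded shuffle, then cut the sample list at train_percent."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # seeded so the split is reproducible
    cut = round(len(items) * train_percent / 100)
    return items[:cut], items[cut:]

train, val = split_samples(range(100), train_percent=90)
print(len(train), len(val))  # 90 10
```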
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -530,7 +530,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of the object detection model PicoDet-S as an example:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/det_coco_examples
 ```
@@ -561,7 +561,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/det_coco_examples
 ```
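The evaluation above reports COCO-style detection metrics, which are built on box IoU. For reference, a minimal IoU for axis-aligned `[x1, y1, x2, y2]` boxes:

```python
def box_iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(box_iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 0.14285714285714285
```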
@@ -583,7 +583,7 @@ After completing model training and evaluation, you can use the trained model we
 
 * To perform inference predictions through the command line, use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml  \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_object_detection_002.png"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/object_detection.md

@@ -608,7 +608,7 @@ tar -xf ./dataset/det_coco_examples.tar -C ./dataset/
 Validate your dataset with a single command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 ```
@@ -696,12 +696,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples
 </code></pre>
 <p>The above parameters can also be set by appending command-line arguments. Taking a <code>LabelMe</code>-format dataset as an example:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -727,13 +727,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples
 </code></pre>
 <p>After dataset splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -745,7 +745,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of the object detection model `PicoDet-S` as an example:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/det_coco_examples
 ```
@@ -776,7 +776,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/det_coco_examples
 ```
@@ -799,7 +799,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
 
 * To perform inference predictions through the command line, use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml  \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_object_detection_002.png"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.en.md

@@ -89,7 +89,7 @@ tar -xf ./dataset/pedestrian_attribute_examples.tar -C ./dataset/
 Run a single command to complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 ```
@@ -176,13 +176,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples \
     -o CheckDataset.split.enable=True \
@@ -195,7 +195,7 @@ CheckDataset:
 Model training can be completed with a single command. Taking the training of the PP-LCNet pedestrian attribute recognition model (PP-LCNet_x1_0_pedestrian_attribute) as an example:
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 ```
@@ -226,7 +226,7 @@ the following steps are required:
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 ```
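Pedestrian attribute recognition is a multi-label task: the model outputs an independent score per attribute, and a threshold turns scores into predicted labels. An illustrative sketch (the attribute names and the 0.5 threshold here are made up for the example):

```python
def decode_attributes(scores, names, threshold=0.5):
    """Keep every attribute whose independent score clears the threshold."""
    return [(name, s) for name, s in zip(names, scores) if s >= threshold]

names = ["male", "hat", "glasses", "backpack"]
print(decode_attributes([0.91, 0.12, 0.66, 0.40], names))
# [('male', 0.91), ('glasses', 0.66)]
```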
@@ -249,7 +249,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pedestrian_attribute_006.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="pedestrian_attribute_006.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md

@@ -87,7 +87,7 @@ tar -xf ./dataset/pedestrian_attribute_examples.tar -C ./dataset/
 Run a single command to complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 ```
@@ -174,13 +174,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples \
     -o CheckDataset.split.enable=True \
@@ -193,7 +193,7 @@ CheckDataset:
 Model training can be completed with a single command. Taking the training of the PP-LCNet pedestrian attribute recognition model (PP-LCNet_x1_0_pedestrian_attribute) as an example:
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 ```
@@ -224,7 +224,7 @@ python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/pedestrian_attribute_examples
 ```
@@ -247,7 +247,7 @@ python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pedestrian_attribute_006.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
+python main.py -c paddlex/configs/modules/pedestrian_attribute_recognition/PP-LCNet_x1_0_pedestrian_attribute.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="pedestrian_attribute_006.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/rotated_object_detection.en.md

@@ -76,7 +76,7 @@ After decompression, the dataset directory structure is as follows:
 Data validation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
@@ -165,13 +165,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 </code></pre>
 <p>After the dataset splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code>.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data \
     -o CheckDataset.split.enable=True \
@@ -183,7 +183,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of the rotated object detection model `PP-YOLOE-R-L` as an example:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
@@ -214,7 +214,7 @@ Other related parameters can be set by modifying the fields under Global and Tra
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
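Rotated boxes are commonly parameterized as `(cx, cy, w, h, angle)`; converting that to four corner points is the usual first step when visualizing or matching predictions. A sketch under the assumption of a counter-clockwise angle in radians (conventions differ between codebases):

```python
import math

def rbox_to_corners(cx, cy, w, h, angle):
    """Return the four corners of a rotated rectangle (ccw angle, radians)."""
    c, s = math.cos(angle), math.sin(angle)
    corners = []
    # local corner offsets, rotated then translated to the center
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)):
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners

print(rbox_to_corners(0, 0, 4, 2, math.pi / 2))
```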
@@ -236,7 +236,7 @@ After completing model training and evaluation, you can use the trained model we
 
 * To perform inference predictions through the command line, use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/rotated_object_detection_001.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml  \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="rotated_object_detection_001.png"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/rotated_object_detection.md

@@ -276,7 +276,7 @@ tar -xf ./dataset/rdet_dota_examples.tar -C ./dataset/
 Data validation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
@@ -365,13 +365,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 </code></pre>
 <p>After the dataset splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data \
     -o CheckDataset.split.enable=True \
@@ -383,7 +383,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of the rotated object detection model `PP-YOLOE-R-L` as an example:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
@@ -414,7 +414,7 @@ python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
@@ -437,7 +437,7 @@ python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
 
 * To perform inference predictions through the command line, use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/rotated_object_detection_001.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml  \
+python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="rotated_object_detection_001.png"

+ 9 - 9
docs/module_usage/tutorials/cv_modules/semantic_segmentation.en.md

@@ -233,7 +233,7 @@ tar -xf ./dataset/seg_optic_examples.tar -C ./dataset/
 Data validation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 ```
@@ -295,7 +295,7 @@ After executing the above command, PaddleX will verify the dataset and collect b
 <pre><code class="language-bash">wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/seg_dataset_to_convert.tar -P ./dataset
 tar -xf ./dataset/seg_dataset_to_convert.tar -C ./dataset/
 </code></pre>
-<p>After downloading, modify the <code>paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml</code> configuration as follows:</p>
+<p>After downloading, modify the <code>paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml</code> configuration as follows:</p>
 <pre><code class="language-bash">......
 CheckDataset:
   ......
@@ -305,12 +305,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_dataset_to_convert
 </code></pre>
 <p>The above parameters can also be set by appending command-line arguments. For a <code>LabelMe</code>-format dataset, the command is:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_dataset_to_convert \
     -o CheckDataset.convert.enable=True \
@@ -335,13 +335,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_optic_examples \
     -o CheckDataset.split.enable=True \
@@ -354,7 +354,7 @@ CheckDataset:
 Model training can be completed with just one command. Here, we use the semantic segmentation model (PP-LiteSeg-T) as an example:
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 ```
@@ -387,7 +387,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After model training, you can evaluate the specified model weights on the validation set to verify model accuracy. Using PaddleX for model evaluation requires just one command:
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 ```
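Semantic segmentation evaluation centers on mIoU: per-class intersection over union of the predicted and ground-truth label maps, averaged over the classes present. A minimal sketch over flattened label lists (illustrative only, not the evaluator PaddleX uses):

```python
def miou(pred, gt, num_classes):
    """Mean intersection-over-union over flattened label lists."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

print(miou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2))
```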
@@ -416,7 +416,7 @@ To perform inference predictions via the command line, use the following command
 
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model" \
     -o Predict.input="general_semantic_segmentation_002.png"

+ 9 - 9
docs/module_usage/tutorials/cv_modules/semantic_segmentation.md

@@ -417,7 +417,7 @@ tar -xf ./dataset/seg_optic_examples.tar -C ./dataset/
 Data validation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 ```
@@ -478,7 +478,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 <pre><code class="language-bash">wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/seg_dataset_to_convert.tar -P ./dataset
 tar -xf ./dataset/seg_dataset_to_convert.tar -C ./dataset/
 </code></pre>
-<p>After downloading, modify the <code>paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml</code> configuration as follows:</p>
+<p>After downloading, modify the <code>paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml</code> configuration as follows:</p>
 <pre><code class="language-bash">......
 CheckDataset:
   ......
@@ -488,12 +488,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_dataset_to_convert
 </code></pre>
 <p>The above parameters can also be set by appending command-line arguments. Taking a <code>LabelMe</code>-format dataset as an example:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_dataset_to_convert \
     -o CheckDataset.convert.enable=True \
@@ -518,13 +518,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/seg_optic_examples \
     -o CheckDataset.split.enable=True \
@@ -536,7 +536,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of the mobile semantic segmentation model PP-LiteSeg-T as an example:
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 ```
@@ -569,7 +569,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/seg_optic_examples
 ```
@@ -593,7 +593,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 To perform inference predictions via the command line, use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_semantic_segmentation_002.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
+python main.py -c paddlex/configs/modules/semantic_segmentation/PP-LiteSeg-T.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model" \
     -o Predict.input="general_semantic_segmentation_002.png"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/small_object_detection.en.md

@@ -88,7 +88,7 @@ tar -xf ./dataset/small_det_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/small_det_examples
 ```
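Dataset checking verifies structure and gathers statistics such as image counts and per-class instance counts. A toy sketch of that kind of summary for a COCO-style annotation dict (simplified; a real checker also validates file paths and renders sample images):

```python
from collections import Counter

def summarize_coco(coco):
    """Count images and per-category instances in a COCO-style dict."""
    cat_names = {c["id"]: c["name"] for c in coco["categories"]}
    counts = Counter(cat_names[a["category_id"]] for a in coco["annotations"])
    return {"num_images": len(coco["images"]), "instances": dict(counts)}

toy = {
    "images": [{"id": 1}, {"id": 2}],
    "categories": [{"id": 1, "name": "person"}, {"id": 2, "name": "car"}],
    "annotations": [{"category_id": 1}, {"category_id": 1}, {"category_id": 2}],
}
print(summarize_coco(toy))  # {'num_images': 2, 'instances': {'person': 2, 'car': 1}}
```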
@@ -158,12 +158,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset
 </code></pre>
 <p>The above parameters can also be set by appending command-line arguments. Taking a <code>LabelMe</code>-format dataset as an example:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset \
     -o CheckDataset.convert.enable=True \
@@ -188,13 +188,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/small_det_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/small_det_examples \
     -o CheckDataset.split.enable=True \
@@ -206,7 +206,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of `PP-YOLOE_plus_SOD-S` as an example:
 
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/small_det_examples \
     -o Train.num_classes=10
@@ -238,7 +238,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/small_det_examples
 ```
@@ -260,7 +260,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * To perform inference predictions through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/small_object_detection.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="small_object_detection.jpg"

+ 8 - 8
docs/module_usage/tutorials/cv_modules/small_object_detection.md

@@ -275,7 +275,7 @@ tar -xf ./dataset/small_det_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/small_det_examples
 ```
@@ -345,12 +345,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset
 </code></pre>
 <p>当然,以上参数同样支持通过追加命令行参数的方式进行设置,以 <code>LabelMe</code> 格式的数据集为例:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset \
     -o CheckDataset.convert.enable=True \
@@ -375,13 +375,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/small_det_examples
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/small_det_examples \
     -o CheckDataset.split.enable=True \
@@ -393,7 +393,7 @@ CheckDataset:
 一条命令即可完成模型的训练,以此处PP-YOLOE_plus_SOD-S的训练为例:
 
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/small_det_examples \
     -o Train.num_classes=10
@@ -425,7 +425,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yam
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,一条命令即可完成模型的评估:
 
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/small_det_examples
 ```
@@ -447,7 +447,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yam
 #### 4.4.1 模型推理
 * 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/small_object_detection.jpg)到本地。
 ```bash
-python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
+python main.py -c paddlex/configs/modules/small_object_detection/PP-YOLOE_plus_SOD-S.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="small_object_detection.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.en.md

@@ -73,7 +73,7 @@ tar -xf ./dataset/vehicle_attribute_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 ```
@@ -161,13 +161,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples \
     -o CheckDataset.split.enable=True \
@@ -179,7 +179,7 @@ CheckDataset:
 Training a model can be done with a single command, taking the training of the PP-LCNet vehicle attribute recognition model (`PP-LCNet_x1_0_vehicle_attribute`) as an example:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 ```
@@ -210,7 +210,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 ```
@@ -235,7 +235,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_attribute_007.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="vehicle_attribute_007.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md

@@ -70,7 +70,7 @@ tar -xf ./dataset/vehicle_attribute_examples.tar -C ./dataset/
 一行命令即可完成数据校验:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 ```
@@ -157,13 +157,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples \
     -o CheckDataset.split.enable=True \
@@ -175,7 +175,7 @@ CheckDataset:
 一条命令即可完成模型的训练,以此处 PP-LCNet 车辆属性识别模型(`PP-LCNet_x1_0_vehicle_attribute`)的训练为例:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 ```
@@ -206,7 +206,7 @@ python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_ve
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,一条命令即可完成模型的评估:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml  \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/vehicle_attribute_examples
 ```
@@ -229,7 +229,7 @@ python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_ve
 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_attribute_007.jpg)到本地。
 
 ```bash
-python main.py -c paddlex/configs/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
+python main.py -c paddlex/configs/modules/vehicle_attribute_recognition/PP-LCNet_x1_0_vehicle_attribute.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="vehicle_attribute_007.jpg"

+ 6 - 6
docs/module_usage/tutorials/cv_modules/vehicle_detection.en.md

@@ -78,7 +78,7 @@ tar -xf ./dataset/vehicle_coco_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 ```
@@ -150,13 +150,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -168,7 +168,7 @@ CheckDataset:
 Model training can be completed with a single command, taking the training of `PP-YOLOE-S_vehicle` as an example:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 ```
@@ -199,7 +199,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 ```
@@ -231,7 +231,7 @@ The weights you produced can be directly integrated into the object detection mo
 
 * To perform inference predictions through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_detection.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="vehicle_detection.jpg"
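
Readers with their own scripts or notes referencing the old layout can migrate them with the same one-line substitution these hunks perform. A sketch, assuming paths follow the `paddlex/configs/<task>/<model>.yaml` pattern; the `\|…|!` sed address guards lines that already use the new layout so they are not rewritten twice.

```shell
# Sketch: migrate a pre-#2937 config path to the new "modules/" layout.
# Lines already containing "paddlex/configs/modules/" are left untouched
# (the `\|…|!` address negates the match for those lines).
old='python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml'
new=$(printf '%s\n' "$old" \
  | sed '\|paddlex/configs/modules/|! s|paddlex/configs/|paddlex/configs/modules/|')
echo "$new"
```

Applied recursively (e.g. via `sed -i` over a scripts directory), this reproduces the path change made across all 55 docs in this commit.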

+ 6 - 6
docs/module_usage/tutorials/cv_modules/vehicle_detection.md

@@ -263,7 +263,7 @@ tar -xf ./dataset/vehicle_coco_examples.tar -C ./dataset/
 一行命令即可完成数据校验:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 ```
@@ -335,13 +335,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -353,7 +353,7 @@ CheckDataset:
 一条命令即可完成模型的训练,以此处`PP-YOLOE-S_vehicle`的训练为例:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 ```
@@ -384,7 +384,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,一条命令即可完成模型的评估:
 
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/vehicle_coco_examples
 ```
@@ -416,7 +416,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 
 * 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_detection.jpg)到本地。
 ```bash
-python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
+python main.py -c paddlex/configs/modules/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="vehicle_detection.jpg"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.en.md

@@ -69,7 +69,7 @@ tar -xf ./dataset/text_image_orientation.tar  -C ./dataset/
 Data validation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/text_image_orientation
 ```
@@ -158,13 +158,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/text_image_orientation
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support setting through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/text_image_orientation \
     -o CheckDataset.split.enable=True \
@@ -178,7 +178,7 @@ CheckDataset:
 Model training can be completed with just one command. Here, we use the document image orientation classification model (PP-LCNet_x1_0_doc_ori) as an example:
 
 ```bash
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/text_image_orientation
 ```
@@ -212,7 +212,7 @@ Other relevant parameters can be set by modifying fields under `Global` and `Tra
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. With PaddleX, model evaluation can be done with just one command:
 
 ```bash
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/text_image_orientation
 ```
@@ -247,7 +247,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference predictions via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="img_rot180_demo.jpg"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md

@@ -235,7 +235,7 @@ tar -xf ./dataset/text_image_orientation.tar  -C ./dataset/
 一行命令即可完成数据校验:
 
 ```bash
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/text_image_orientation
 ```
@@ -322,13 +322,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/text_image_orientation
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/text_image_orientation \
     -o CheckDataset.split.enable=True \
@@ -340,7 +340,7 @@ CheckDataset:
 一条命令即可完成模型的训练,此处以文档图像方向分类模型(PP-LCNet_x1_0_doc_ori)的训练为例:
 
 ```bash
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/text_image_orientation
 ```
@@ -371,7 +371,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,一条命令即可完成模型的评估:
 
 ```
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/text_image_orientation
 ```
@@ -395,7 +395,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg)到本地。
 
 ```
-python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
+python main.py -c paddlex/configs/modules/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="img_rot180_demo.jpg"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/layout_detection.en.md

@@ -131,7 +131,7 @@ tar -xf ./dataset/det_layout_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_layout_examples
 ```
@@ -205,13 +205,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_layout_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_layout_examples \
     -o CheckDataset.split.enable=True \
@@ -224,7 +224,7 @@ CheckDataset:
 A single command is sufficient to complete model training, taking the training of PicoDet-L_layout_3cls as an example:
 
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/det_layout_examples
 ```
@@ -256,7 +256,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation, you can complete the evaluation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/det_layout_examples
 ```
@@ -278,7 +278,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * To perform inference predictions through the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="layout.jpg"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/layout_detection.md

@@ -523,7 +523,7 @@ tar -xf ./dataset/det_layout_examples.tar -C ./dataset/
 一行命令即可完成数据校验:
 
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_layout_examples
 ```
@@ -595,13 +595,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_layout_examples
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code>python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml  \
+<pre><code>python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_layout_examples \
     -o CheckDataset.split.enable=True \
@@ -613,7 +613,7 @@ CheckDataset:
 一条命令即可完成模型的训练,以此处`PicoDet-L_layout_3cls`的训练为例:
 
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/det_layout_examples
 ```
@@ -644,7 +644,7 @@ python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,一条命令即可完成模型的评估:
 
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/det_layout_examples
 ```
@@ -666,7 +666,7 @@ python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
 #### 4.4.1 模型推理
 * 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg)到本地。
 ```bash
-python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml \
+python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="layout.jpg"
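
To audit whether any references to the old layout survived a migration like this one, a two-stage grep filter works: keep lines mentioning `paddlex/configs/` and drop those already under `paddlex/configs/modules/`. A sketch over an inline sample stream; in practice the input would come from `grep -rn` over a docs or scripts tree.

```shell
# Sketch: surface stale (pre-#2937) config references in a stream of
# command lines; only paths missing the "modules/" segment pass through.
stale=$(printf '%s\n' \
    'python main.py -c paddlex/configs/layout_detection/PicoDet-L_layout_3cls.yaml' \
    'python main.py -c paddlex/configs/modules/layout_detection/PicoDet-L_layout_3cls.yaml' \
  | grep 'paddlex/configs/' | grep -v 'paddlex/configs/modules/')
echo "$stale"
```

An empty result means every reference in the stream already uses the new `configs/modules/` layout.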

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/seal_text_detection.en.md

@@ -82,7 +82,7 @@ tar -xf ./dataset/ocr_curve_det_dataset_examples.tar -C ./dataset/
 Data validation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 ```
@@ -167,13 +167,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support setting through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -186,7 +186,7 @@ CheckDataset:
 Model training can be completed with just one command. Here, we use the Seal Text Detection model (PP-OCRv4_server_seal_det) as an example:
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 ```
@@ -220,7 +220,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After model training, you can evaluate the specified model weights on the validation set to verify model accuracy. Using PaddleX for model evaluation requires just one command:
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 ```
@@ -249,7 +249,7 @@ To perform inference predictions via the command line, use the following command
 
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="seal_text_det.png"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/seal_text_detection.md

@@ -392,7 +392,7 @@ tar -xf ./dataset/ocr_curve_det_dataset_examples.tar -C ./dataset/
 一行命令即可完成数据校验:
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 ```
@@ -477,13 +477,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -495,7 +495,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the PP-OCRv4 server seal text detection model (PP-OCRv4_server_seal_det):
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 ```
@@ -526,7 +526,7 @@ python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.y
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX, model evaluation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_curve_det_dataset_examples
 ```
@@ -550,7 +550,7 @@ python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.y
 To perform inference predictions via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
+python main.py -c paddlex/configs/modules/seal_text_detection/PP-OCRv4_server_seal_det.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="seal_text_det.png"
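
Every hunk in this patch applies the same mechanical substitution (`paddlex/configs/…` → `paddlex/configs/modules/…`), so a sweep like this could be scripted rather than edited by hand. A hypothetical GNU sed one-liner is sketched below; the optional `\(modules/\)\?` group makes it idempotent, so already-migrated paths are left untouched (the `docs/` target directory and GNU `sed -i`/BRE `\?` are assumptions, not part of the patch):

```shell
# Rewrite legacy config paths across the docs tree (GNU sed assumed).
# The optional \(modules/\)\? group makes the substitution idempotent:
# paths already under modules/ are rewritten to themselves.
grep -rl 'paddlex/configs/' docs/ | xargs sed -i \
  's#paddlex/configs/\(modules/\)\?#paddlex/configs/modules/#g'
```

Run from the repository root and review `git diff` afterwards, since a blanket substitution can touch prose as well as commands.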

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/table_structure_recognition.en.md

@@ -77,7 +77,7 @@ tar -xf ./dataset/table_rec_dataset_examples.tar -C ./dataset/
 Run a single command to complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 ```
@@ -157,13 +157,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters also support setting through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -176,7 +176,7 @@ CheckDataset:
 A single command can complete the model training. Taking the training of the table structure recognition model SLANet as an example:
 
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 ```
@@ -206,7 +206,7 @@ the following steps are required:
 ## <b>4.3 Model Evaluation</b>
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 ```
@@ -228,7 +228,7 @@ After completing model training and evaluation, you can use the trained model we
 #### 4.4.1 Model Inference
 * Inference predictions can be performed through the command line with just one command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml  \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="table_recognition.jpg"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md

@@ -74,7 +74,7 @@ tar -xf ./dataset/table_rec_dataset_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 ```
@@ -154,13 +154,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -172,7 +172,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the table structure recognition model SLANet:
 
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 ```
@@ -203,7 +203,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX, model evaluation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/table_rec_dataset_examples
 ```
@@ -226,7 +226,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
 
 * To perform inference predictions via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg) to your local machine.
 ```bash
-python main.py -c paddlex/configs/table_recognition/SLANet.yaml  \
+python main.py -c paddlex/configs/modules/table_recognition/SLANet.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="table_recognition.jpg"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/text_detection.en.md

@@ -75,7 +75,7 @@ tar -xf ./dataset/ocr_det_dataset_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 ```
@@ -144,13 +144,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -162,7 +162,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the PP-OCRv4 mobile text detection model (`PP-OCRv4_mobile_det`):
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 ```
@@ -194,7 +194,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 ```
@@ -219,7 +219,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference predictions via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_001.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="general_ocr_001.png"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/text_detection.md

@@ -375,7 +375,7 @@ tar -xf ./dataset/ocr_det_dataset_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 ```
@@ -443,13 +443,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -461,7 +461,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the PP-OCRv4 mobile text detection model (`PP-OCRv4_mobile_det`):
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 ```
@@ -492,7 +492,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX, model evaluation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_det_dataset_examples
 ```
@@ -516,7 +516,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 To perform inference predictions via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_001.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
+python main.py -c paddlex/configs/modules/text_detection/PP-OCRv4_mobile_det.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="general_ocr_001.png"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/text_recognition.en.md

@@ -446,7 +446,7 @@ tar -xf ./dataset/ocr_rec_dataset_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 ```
@@ -516,13 +516,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 </code></pre>
 <p>After data splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support setting through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -534,7 +534,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the PP-OCRv4 mobile text recognition model (PP-OCRv4_mobile_rec):
 
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 ```
@@ -568,7 +568,7 @@ After completing model training, you can evaluate the specified model weights fi
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 
@@ -596,7 +596,7 @@ Before running the following code, please download the [demo image](https://padd
 
 
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="general_ocr_rec_001.png"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/text_recognition.md

@@ -487,7 +487,7 @@ tar -xf ./dataset/ocr_rec_dataset_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 ```
@@ -555,13 +555,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -573,7 +573,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the PP-OCRv4 mobile text recognition model (PP-OCRv4_mobile_rec):
 
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 ```
@@ -606,7 +606,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX, model evaluation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
 
@@ -630,7 +630,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 To perform inference predictions via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.png) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
+python main.py -c paddlex/configs/modules/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_accuracy/inference" \
     -o Predict.input="general_ocr_rec_001.png"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.en.md

@@ -67,7 +67,7 @@ tar -xf ./dataset/textline_orientation_example_data.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 ```
@@ -154,13 +154,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data \
     -o CheckDataset.split.enable=True \
@@ -172,7 +172,7 @@ CheckDataset:
 Model training can be completed with a single command. Here, the training of the text line orientation classification model (PP-LCNet_x0_25_textline_ori) is taken as an example:
 
 ```bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 ```
@@ -203,7 +203,7 @@ Other related parameters can be set by modifying the fields under `Global` and `
 After completing model training, you can evaluate the specified model weights on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 ```
@@ -227,7 +227,7 @@ After completing model training and evaluation, you can use the trained model we
 Performing inference predictions through the command line requires only the following single command. Before running the following code, please download the [example image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/textline_rot180_demo.jpg) locally.
 
 ```bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="textline_rot180_demo.jpg"

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/textline_orientation_classification.md

@@ -236,7 +236,7 @@ tar -xf ./dataset/textline_orientation_example_data.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 ```
@@ -323,13 +323,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data \
     -o CheckDataset.split.enable=True \
@@ -341,7 +341,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the text line orientation classification model (PP-LCNet_x0_25_textline_ori):
 
 ```bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 ```
@@ -372,7 +372,7 @@ python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_o
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX, model evaluation can be completed with a single command:
 
 ``` bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/textline_orientation_example_data
 ```
@@ -396,7 +396,7 @@ python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_o
 To perform inference predictions via the command line, simply use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/textline_rot180_demo.jpg) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
+python main.py -c paddlex/configs/modules/textline_orientation/PP-LCNet_x0_25_textline_ori.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="textline_rot180_demo.jpg"

+ 8 - 8
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.en.md

@@ -97,7 +97,7 @@ tar -xf ./dataset/ts_anomaly_examples.tar -C ./dataset/
 #### 4.1.2 Data Validation
 You can complete data validation with a single command:
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 ```
@@ -188,12 +188,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 </code></pre>
 <p>The above parameters also support setting through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples \
     -o CheckDataset.convert.enable=True
@@ -221,13 +221,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support setting through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples \
     -o CheckDataset.split.enable=True \
@@ -239,7 +239,7 @@ CheckDataset:
 Model training can be completed with just one command. Here, we use the time series anomaly detection model (AutoEncoder_ad) as an example:
 
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 ```
@@ -273,7 +273,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 ```
@@ -299,7 +299,7 @@ To perform inference predictions through the command line, simply use the follow
 Before running the following code, please download the [demo csv](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_ad.csv) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/inference" \
     -o Predict.input="ts_ad.csv"

+ 9 - 9
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md

@@ -78,7 +78,7 @@ for res in output:
 After running, the result obtained is:
 ```bash
 {'res': {'input_path': 'ts_ad.csv', 'anomaly':            label
-timestamp       
+timestamp
 220226         1
 220227         1
 220228         0
@@ -269,7 +269,7 @@ tar -xf ./dataset/ts_anomaly_examples.tar -C ./dataset/
 A single command can complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 ```
@@ -358,12 +358,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 </code></pre>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples \
     -o CheckDataset.convert.enable=True
@@ -391,13 +391,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command-line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples \
     -o CheckDataset.split.enable=True \
@@ -409,7 +409,7 @@ CheckDataset:
 Model training can be completed with a single command. Here's an example of training the time series anomaly detection model (AutoEncoder_ad):
 
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 ```
@@ -440,7 +440,7 @@ python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX, model evaluation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ts_anomaly_examples
 ```
@@ -465,7 +465,7 @@ python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例数据](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_ad.csv)到本地。
 
 ```bash
-python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
+python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/inference" \
     -o Predict.input="ts_ad.csv"
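Every hunk in this commit applies the same mechanical rewrite: insert `modules/` after `paddlex/configs/` in each documented command. A minimal shell sketch of that substitution, demonstrated on one sample line from the hunks above (the `#` delimiter for `sed` is an arbitrary choice to avoid escaping slashes):

```shell
# Apply the rewrite this commit makes: insert "modules/" after
# "paddlex/configs/" in a single doc line.
old='python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml'
new="$(printf '%s\n' "$old" | sed 's#paddlex/configs/#paddlex/configs/modules/#')"
echo "$new"
# prints: python main.py -c paddlex/configs/modules/ts_anomaly_detection/AutoEncoder_ad.yaml
```

Run recursively over the docs tree (e.g. `grep -rl 'paddlex/configs/' docs | xargs sed -i 's#...#...#g'`, paths assumed), one pass of this substitution would produce the per-file changes recorded in this diff.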

+ 8 - 8
docs/module_usage/tutorials/time_series_modules/time_series_classification.en.md

@@ -62,7 +62,7 @@ tar -xf ./dataset/ts_classify_examples.tar -C ./dataset/
 You can complete data validation with a single command:
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 ```
@@ -163,12 +163,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 </code></pre>
 <p>The above parameters can also be set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples \
     -o CheckDataset.convert.enable=True
@@ -196,13 +196,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters can also be set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples \
     -o CheckDataset.split.enable=True \
@@ -216,7 +216,7 @@ CheckDataset:
 Model training can be completed with just one command. Here, we use the Time Series Classification model (TimesNet_cls) as an example:
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 ```
@@ -250,7 +250,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 ```
@@ -276,7 +276,7 @@ To perform inference prediction via the command line, simply use the following c
 Before running the following code, please download the [demo csv](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_cls.csv) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/inference" \
     -o Predict.input="ts_cls.csv"

+ 8 - 8
docs/module_usage/tutorials/time_series_modules/time_series_classification.md

@@ -236,7 +236,7 @@ tar -xf ./dataset/ts_classify_examples.tar -C ./dataset/
 一行命令即可完成数据校验:
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 ```
@@ -337,12 +337,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 </code></pre>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples \
     -o CheckDataset.convert.enable=True
@@ -370,13 +370,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code>python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_classify_examples \
     -o CheckDataset.split.enable=True \
@@ -388,7 +388,7 @@ CheckDataset:
 一条命令即可完成模型的训练,此处以时序分类模型(TimesNet_cls)的训练为例:
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 ```
@@ -419,7 +419,7 @@ python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,一条命令即可完成模型的评估:
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ts_classify_examples
 ```
@@ -443,7 +443,7 @@ python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例数据](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_cls.csv)到本地。
 
 ```bash
-python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
+python main.py -c paddlex/configs/modules/ts_classification/TimesNet_cls.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/inference" \
     -o Predict.input="ts_cls.csv"

+ 9 - 9
docs/module_usage/tutorials/time_series_modules/time_series_forecasting.en.md

@@ -98,7 +98,7 @@ tar -xf ./dataset/ts_dataset_examples.tar -C ./dataset/
 Data validation can be completed with a single command:
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 ```
@@ -211,7 +211,7 @@ The verification results above indicate that `check_pass` being `True` means the
 <li><code>enable</code>: Whether to enable dataset format conversion, supporting <code>xlsx</code> and <code>xls</code> format conversion, default is <code>False</code>;</li>
 <li><code>src_dataset_type</code>: If dataset format conversion is enabled, the source dataset format needs to be set, default is <code>null</code>.</li>
 </ul>
-<p>Modify the <code>paddlex/configs/ts_forecast/DLinear.yaml</code> configuration as follows:</p>
+<p>Modify the <code>paddlex/configs/modules/ts_forecast/DLinear.yaml</code> configuration as follows:</p>
 <pre><code class="language-bash">......
 CheckDataset:
   ......
@@ -221,12 +221,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_forecast_to_convert
 </code></pre>
 <p>Of course, the above parameters also support being set by appending command-line arguments. For an <code>xlsx</code>-format dataset, the command is:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_forecast_to_convert \
     -o CheckDataset.convert.enable=True \
@@ -250,13 +250,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 </code></pre>
 <p>After dataset splitting, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>The above parameters also support being set by appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml  \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -269,7 +269,7 @@ CheckDataset:
 Model training can be completed with just one command. Here, we use the Time Series Forecasting model (DLinear) as an example:
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 ```
@@ -302,7 +302,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 After model training, you can evaluate the specified model weights on the validation set to verify model accuracy. Using PaddleX for model evaluation requires just one command:
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 ```
@@ -332,7 +332,7 @@ To perform inference predictions via the command line, use the following command
 Before running the following code, please download the [demo csv](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_fc.csv) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/inference" \
     -o Predict.input="ts_fc.csv"
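The `-o Global.mode=...` flags used in every command above override dotted keys in the YAML config named by `-c`. A rough sketch of how such dotted overrides can merge into a nested config dict — this mimics the CLI behaviour shown in the docs and is not PaddleX's actual implementation (parsing details are assumed):

```python
# Merge "-o Section.key=value" style overrides into a nested config dict.
# Hypothetical helper; PaddleX's real override handling may differ.
def apply_overrides(config: dict, overrides: list[str]) -> dict:
    for item in overrides:
        key_path, _, value = item.partition("=")
        node = config
        *parents, leaf = key_path.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # create nested sections on demand
        node[leaf] = value  # values kept as strings, as they arrive from the CLI
    return config

cfg = {"Global": {"mode": "predict", "dataset_dir": None}}
apply_overrides(cfg, ["Global.mode=train", "Train.epochs_iters=10"])
# cfg["Global"]["mode"] is now "train"; a new "Train" section was created
```

This is why the docs can show the same YAML driving `check_dataset`, `train`, `evaluate`, and `predict`: only the overridden keys change between invocations.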

+ 9 - 9
docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md

@@ -94,7 +94,7 @@ for res in output:
 运行后,得到的结果为:
 ```bash
 {'res': {'input_path': 'ts_fc.csv', 'forecast':                            OT
-date                         
+date
 2018-06-26 20:00:00  9.586131
 2018-06-26 21:00:00  9.379762
 2018-06-26 22:00:00  9.252275
@@ -284,7 +284,7 @@ tar -xf ./dataset/ts_dataset_examples.tar -C ./dataset/
 一行命令即可完成数据校验:
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 ```
@@ -405,12 +405,12 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 </code></pre>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples \
     -o CheckDataset.convert.enable=True
@@ -438,13 +438,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>随后执行命令:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 </code></pre>
 <p>数据划分执行之后,原有标注文件会被在原路径下重命名为 <code>xxx.bak</code>。</p>
 <p>以上参数同样支持通过追加命令行参数的方式进行设置:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/ts_dataset_examples \
     -o CheckDataset.split.enable=True \
@@ -456,7 +456,7 @@ CheckDataset:
 一条命令即可完成模型的训练,此处以高效率时序预测模型(DLinear)的训练为例:
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 ```
@@ -487,7 +487,7 @@ python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,一条命令即可完成模型的评估:
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ts_dataset_examples
 ```
@@ -511,7 +511,7 @@ python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
 通过命令行的方式进行推理预测,只需如下一条命令。运行以下代码前,请您下载[示例数据](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_fc.csv)到本地。
 
 ```bash
-python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
+python main.py -c paddlex/configs/modules/ts_forecast/DLinear.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/inference" \
     -o Predict.input="ts_fc.csv"

+ 6 - 6
docs/module_usage/tutorials/video_modules/video_classification.en.md

@@ -75,7 +75,7 @@ tar -xf ./dataset/k400_examples.tar -C ./dataset/
 One command is all you need to complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
+python main.py -c paddlex/configs/modules/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/k400_examples
 ```
@@ -160,13 +160,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/k400_examples
 </code></pre>
 <p>After the data splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code> in the original path.</p>
 <p>These parameters also support being set through appending command line arguments:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/modules/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/k400_examples \
     -o CheckDataset.split.enable=True \
@@ -177,7 +177,7 @@ CheckDataset:
 ### 4.2 Model Training
 A single command can complete the model training. Taking the training of the video classification model PP-TSMv2-LCNetV2_8frames_uniform as an example:
 ```
-python main.py -c paddlex/configs/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml  \
+python main.py -c paddlex/configs/modules/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml  \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/k400_examples
 ```
@@ -208,7 +208,7 @@ the following steps are required:
 ## <b>4.3 Model Evaluation</b>
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model accuracy. Using PaddleX for model evaluation, a single command can complete the model evaluation:
 ```bash
-python main.py -c  paddlex/configs/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml  \
+python main.py -c paddlex/configs/modules/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/k400_examples
 ```
@@ -230,7 +230,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo video](https://paddle-model-ecology.bj.bcebos.com/paddlex/videos/demo_video/general_video_classification_001.mp4) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
+python main.py -c paddlex/configs/modules/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="general_video_classification_001.mp4"
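A commit like this one invites regressions: a future doc edit could reintroduce a pre-`modules/` config path. A hypothetical doc-lint sketch (the pattern and function are assumptions, not part of the PaddleX repo) that flags any `-c paddlex/configs/...` path missing the new `modules/` segment:

```python
import re

# Flag "-c paddlex/configs/..." paths that lack the "modules/" segment
# introduced by this commit. Hypothetical lint helper for illustration.
PATTERN = re.compile(r"-c\s+(paddlex/configs/\S+\.yaml)")

def stale_config_paths(text: str) -> list[str]:
    return [p for p in PATTERN.findall(text)
            if not p.startswith("paddlex/configs/modules/")]

sample = (
    "python main.py -c paddlex/configs/modules/video_detection/YOWO.yaml\n"
    "python main.py -c paddlex/configs/video_classification/PP-TSMv2-LCNetV2_8frames_uniform.yaml\n"
)
print(stale_config_paths(sample))
# only the second path is reported; the first already has "modules/"
```

Wired into CI over `docs/`, this kind of check would keep all 55 touched files consistent after the rename.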

+ 4 - 4
docs/module_usage/tutorials/video_modules/video_detection.en.md

@@ -65,7 +65,7 @@ tar -xf ./dataset/video_det_examples.tar -C ./dataset/
 One command is all you need to complete data validation:
 
 ```bash
-python main.py -c paddlex/configs/video_detection/YOWO.yaml \
+python main.py -c paddlex/configs/modules/video_detection/YOWO.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/video_det_examples
 ```
@@ -139,7 +139,7 @@ After completing data validation, you can convert the dataset format or re-split
 ### 4.2 Model Training
 A single command can complete the model training. Taking the training of the video detection model YOWO as an example:
 ```
-python main.py -c paddlex/configs/video_det_examples/YOWO.yaml  \
+python main.py -c paddlex/configs/modules/video_detection/YOWO.yaml  \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/video_det_examples
 ```
@@ -170,7 +170,7 @@ the following steps are required:
 ## <b>4.3 Model Evaluation</b>
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model accuracy. Using PaddleX for model evaluation, a single command can complete the model evaluation:
 ```bash
-python main.py -c  paddlex/configs/video_detection/YOWO.yaml  \
+python main.py -c paddlex/configs/modules/video_detection/YOWO.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/video_det_examples
 ```
@@ -192,7 +192,7 @@ After completing model training and evaluation, you can use the trained model we
 To perform inference prediction through the command line, simply use the following command. Before running the following code, please download the [demo video](https://paddle-model-ecology.bj.bcebos.com/paddlex/videos/demo_video/HorseRiding.avi) to your local machine.
 
 ```bash
-python main.py -c paddlex/configs/video_detection/YOWO.yaml \
+python main.py -c paddlex/configs/modules/video_detection/YOWO.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="HorseRiding.avi"