Zhang Zelun 10 months ago
Parent
Commit
b25d686575

+ 8 - 10
docs/module_usage/tutorials/cv_modules/human_detection.md

@@ -48,9 +48,8 @@ comments: true
 
 ```python
 from paddlex import create_model
-model_name = "PP-YOLOE-S_human"
-model = create_model(model_name)
-output = model.predict("human_detection.jpg", batch_size=1)
+model = create_model(model_name="PP-YOLOE-S_human")
+output = model.predict(input="human_detection.jpg", batch_size=1)
 for res in output:
     res.print()
     res.save_to_img("./output/")
@@ -71,8 +70,7 @@ for res in output:
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/human_detection/human_detection_res.jpg">
-
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/human_detection/human_detection_res.jpg">
 
 Descriptions of the relevant methods and parameters are as follows:
 
@@ -104,7 +102,7 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float/None/dict</code></td>
+<td><code>float/None/dict[int, float]</code></td>
 <td>None</td>
 <td>None</td>
 </tr>
@@ -113,7 +111,7 @@ for res in output:
 * Among them, `model_name` must be specified. Once `model_name` is specified, PaddleX's built-in model parameters are used by default; on this basis, when `model_dir` is also specified, the user's custom model is used.
 * `threshold` is the low-score object filtering threshold. It defaults to None, meaning the upper-layer setting is used; the priority of the settings, from high to low, is: `passed to predict > passed at create_model initialization > set in the yaml config file`. Two threshold formats are currently supported, float and dict:
   * `float`: use the same threshold for all classes.
-  * `dict`: the key is the class ID and the value is the threshold, so different classes use different thresholds. Human detection is single-class detection, so this setting is not needed.
+  * `dict[int, float]`: the key is the class ID and the value is the threshold, so different classes use different thresholds. Human detection is single-class detection, so this setting is not needed.
 
 * Call the `predict()` method of the human detection model to perform inference. The `predict()` method takes the parameters `input`, `batch_size`, and `threshold`, described as follows:
 
@@ -152,12 +150,12 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float</code>/<code>dict</code>/<code>None</code></td>
+<td><code>float</code>/<code>dict[int, float]</code>/<code>None</code></td>
 <td>
 <ul>
   <li><b>None</b>: use the upper-layer setting; the priority of the settings, from high to low, is: <code>passed to predict > passed at create_model initialization > set in the yaml config file</code></li>
-  <li><b>float</b>: e.g. 0.5, meaning 0.5 is used as the low-score object filtering threshold during inference</li>
-  <li><b>dict</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference. Human detection is single-class detection, so this setting is not needed.</li>
+  <li><b>float</b>: use the same threshold for all classes; e.g. 0.5 means 0.5 is used as the low-score object filtering threshold for all classes during inference</li>
+  <li><b>dict[int, float]</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference. Human detection is single-class detection, so this setting is not needed.</li>
 </ul>
 </td>
 <td>None</td>

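The `threshold` priority rule documented above (`passed to predict > passed at create_model initialization > set in the yaml config file`) can be sketched in plain Python. This is an illustrative sketch only, not PaddleX code; the function and argument names are assumptions.

```python
# Hypothetical sketch (not part of PaddleX) of the documented threshold
# priority: the predict() argument wins over the create_model() argument,
# which wins over the yaml config value; None at any level defers downward.
def resolve_threshold(predict_arg=None, init_arg=None, yaml_value=None):
    """Return the first non-None threshold, highest priority first."""
    for value in (predict_arg, init_arg, yaml_value):
        if value is not None:
            return value
    return None

# The predict() argument overrides the other two levels.
print(resolve_threshold(predict_arg=0.6, init_arg=0.4, yaml_value=0.3))  # 0.6
# With no overrides, the yaml value applies.
print(resolve_threshold(yaml_value=0.3))  # 0.3
```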
+ 6 - 7
docs/module_usage/tutorials/cv_modules/instance_segmentation.md

@@ -170,8 +170,8 @@ comments: true
 
 ```python
 from paddlex import create_model
-model = create_model("PP-YOLOE_seg-S")
-output = model.predict("general_instance_segmentation_004.png", batch_size=1)
+model = create_model(model_name="Mask-RT-DETR-L")
+output = model.predict(input="general_instance_segmentation_004.png", batch_size=1)
 for res in output:
     res.print()
     res.save_to_img("./output/")
@@ -182,7 +182,7 @@ for res in output:
 
 After running, the result obtained is:
 ```bash
-{'res': "{'input_path': 'general_instance_segmentation_004.png', 'boxes': [{'cls_id': 0, 'label': 'person', 'score': 0.8723232746124268, 'coordinate': [88.34339, 109.87673, 401.85236, 575.59576]}, {'cls_id': 0, 'label': 'person', 'score': 0.8711188435554504, 'coordinate': [325.114, 1.1152496, 644.10266, 575.359]}, {'cls_id': 0, 'label': 'person', 'score': 0.842758297920227, 'coordinate': [514.18964, 21.760618, 768, 576]}, {'cls_id': 0, 'label': 'person', 'score': 0.8332827091217041, 'coordinate': [0.105075076, 0, 189.23515, 575.9612]}], 'masks': '...'}"}
+{'res': {'input_path': 'general_instance_segmentation_004.png', 'boxes': [{'cls_id': 0, 'label': 'person', 'score': 0.897335946559906, 'coordinate': [0, 0.46382904052734375, 195.22256469726562, 572.8294067382812]}, {'cls_id': 0, 'label': 'person', 'score': 0.8606418967247009, 'coordinate': [341.30389404296875, 0, 640.4802856445312, 575.7348022460938]}, {'cls_id': 0, 'label': 'person', 'score': 0.6397128105163574, 'coordinate': [520.0907592773438, 23.334789276123047, 767.5140380859375, 574.5650634765625]}, {'cls_id': 0, 'label': 'person', 'score': 0.6008261442184448, 'coordinate': [91.02522277832031, 112.34088897705078, 405.4962158203125, 574.1039428710938]}, {'cls_id': 0, 'label': 'person', 'score': 0.5031726360321045, 'coordinate': [200.81265258789062, 58.161617279052734, 272.8892517089844, 140.88356018066406]}], 'masks': '...'}}
 ```
 The meanings of the result parameters are as follows:
 - `input_path`: the path of the input image to be predicted
@@ -191,16 +191,15 @@ for res in output:
   - `label`: the class name
   - `score`: the prediction score
   - `coordinate`: the coordinates of the predicted box, in the format <code>[xmin, ymin, xmax, ymax]</code>
-- `pred`: the masks actually predicted by the instance segmentation model; since the data is too large to print directly, it is replaced here with `...`. You can use `res.save_to_img()` to save the prediction result as an image and `res.save_to_json()` to save it as a JSON file.
+- `masks`: the masks actually predicted by the instance segmentation model; since the data is too large to print directly, it is replaced here with `...`. You can use `res.save_to_img()` to save the prediction result as an image and `res.save_to_json()` to save it as a JSON file.
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/instance_segmentation/general_instance_segmentation_004_res.png">
-
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/instanceseg/general_instance_segmentation_004_res.png">
 
 Descriptions of the relevant methods and parameters are as follows:
 
-* `create_model` instantiates the general instance segmentation model (here using `PP-YOLOE_seg-S` as an example), described as follows:
+* `create_model` instantiates the general instance segmentation model (here using `Mask-RT-DETR-L` as an example), described as follows:
 <table>
 <thead>
 <tr>

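The `boxes` list in the result above is plain Python data, so it can be post-processed directly. A minimal sketch, assuming only the documented fields (`cls_id`, `label`, `score`, `coordinate` in `[xmin, ymin, xmax, ymax]` order); none of these helpers are PaddleX APIs:

```python
# Hypothetical post-processing sketch over the documented `boxes` structure.
boxes = [
    {"cls_id": 0, "label": "person", "score": 0.897, "coordinate": [0, 0.46, 195.2, 572.8]},
    {"cls_id": 0, "label": "person", "score": 0.503, "coordinate": [200.8, 58.2, 272.9, 140.9]},
]

def box_area(coordinate):
    """Area of an [xmin, ymin, xmax, ymax] box (0 for degenerate boxes)."""
    xmin, ymin, xmax, ymax = coordinate
    return max(0.0, xmax - xmin) * max(0.0, ymax - ymin)

# Keep only confident detections, largest area first.
confident = sorted(
    (b for b in boxes if b["score"] >= 0.6),
    key=lambda b: box_area(b["coordinate"]),
    reverse=True,
)
print([b["score"] for b in confident])  # [0.897]
```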
+ 8 - 9
docs/module_usage/tutorials/cv_modules/mainbody_detection.md

@@ -40,9 +40,8 @@ comments: true
 
 ```python
 from paddlex import create_model
-model_name = "PP-ShiTuV2_det"
-model = create_model(model_name)
-output = model.predict("general_object_detection_002.png", batch_size=1)
+model = create_model(model_name="PP-ShiTuV2_det")
+output = model.predict(input="general_object_detection_002.png", batch_size=1)
 for res in output:
     res.print()
     res.save_to_img("./output/")
@@ -63,7 +62,7 @@ for res in output:
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/mainbody_detection/general_object_detection_002_res.png">
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/mainbody_detection/general_object_detection_002_res.png">
 
 
 Descriptions of the relevant methods and parameters are as follows:
@@ -96,7 +95,7 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float/None/dict</code></td>
+<td><code>float/None/dict[int, float]</code></td>
 <td>None</td>
 <td>None</td>
 </tr>
@@ -105,7 +104,7 @@ for res in output:
 * Among them, `model_name` must be specified. Once `model_name` is specified, PaddleX's built-in model parameters are used by default; on this basis, when `model_dir` is also specified, the user's custom model is used.
 * `threshold` is the low-score object filtering threshold. It defaults to None, meaning the upper-layer setting is used; the priority of the settings, from high to low, is: `passed to predict > passed at create_model initialization > set in the yaml config file`. Two threshold formats are currently supported, float and dict:
   * `float`: use the same threshold for all classes.
-  * `dict`: the key is the class ID and the value is the threshold, so different classes use different thresholds. Mainbody detection is single-class detection, so this setting is not needed.
+  * `dict[int, float]`: the key is the class ID and the value is the threshold, so different classes use different thresholds. Mainbody detection is single-class detection, so this setting is not needed.
 
 * Call the `predict()` method of the mainbody detection model to perform inference. The `predict()` method takes the parameters `input`, `batch_size`, and `threshold`, described as follows:
 
@@ -144,12 +143,12 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float</code>/<code>dict</code>/<code>None</code></td>
+<td><code>float</code>/<code>dict[int, float]</code>/<code>None</code></td>
 <td>
 <ul>
   <li><b>None</b>: use the upper-layer setting; the priority of the settings, from high to low, is: <code>passed to predict > passed at create_model initialization > set in the yaml config file</code></li>
-  <li><b>float</b>: e.g. 0.5, meaning <code>0.5</code> is used as the low-score object filtering threshold during inference</li>
-  <li><b>dict</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference. Mainbody detection is single-class detection, so this setting is not needed.</li>
+  <li><b>float</b>: use the same threshold for all classes; e.g. 0.5 means 0.5 is used as the low-score object filtering threshold for all classes during inference</li>
+  <li><b>dict[int, float]</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference. Mainbody detection is single-class detection, so this setting is not needed.</li>
 </ul>
 </td>
 <td>None</td>

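The per-class filtering that a `dict[int, float]` threshold such as `{0: 0.5, 1: 0.35}` describes can be sketched in a few lines of plain Python. This is an assumption-laden illustration, not PaddleX's actual implementation:

```python
# Hypothetical sketch (not a PaddleX API) of low-score object filtering with
# either a single float threshold or a per-class dict[int, float] threshold.
def filter_by_threshold(detections, threshold):
    """Keep detections whose score reaches the threshold for their class."""
    kept = []
    for det in detections:
        if isinstance(threshold, dict):
            limit = threshold.get(det["cls_id"], 0.0)  # no entry: keep all
        else:
            limit = threshold
        if det["score"] >= limit:
            kept.append(det)
    return kept

dets = [{"cls_id": 0, "score": 0.45}, {"cls_id": 1, "score": 0.40}]
print(len(filter_by_threshold(dets, 0.5)))                # 0
print(len(filter_by_threshold(dets, {0: 0.5, 1: 0.35})))  # 1
```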
+ 4 - 4
docs/module_usage/tutorials/cv_modules/open_vocabulary_detection.md

@@ -41,8 +41,8 @@ comments: true
 
 ```python
 from paddlex import create_model
-model = create_model('GroundingDINO-T')
-results = model.predict('open_vocabulary_detection.jpg', prompt = 'bus . walking man . rearview mirror .')
+model = create_model(model_name='GroundingDINO-T')
+results = model.predict(input='open_vocabulary_detection.jpg', prompt='bus . walking man . rearview mirror .', batch_size=1)
 for res in results:
     res.print()
     res.save_to_img(f"./output/")
@@ -62,7 +62,7 @@ for res in results:
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/open_vocabulary_detection/open_vocabulary_detection_res.jpg">
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/open_vocabulary_detection/open_vocabulary_detection_res.jpg">
 
 
 Descriptions of the relevant methods and parameters are as follows:
@@ -156,7 +156,7 @@ for res in results:
 <td>Prompt used by the model for prediction</td>
 <td><code>str</code></td>
 <td>Any string</td>
-<td>1</td>
+<td></td>
 </tr>
 </table>
 

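The text prompt in the example above separates candidate phrases with `" . "`. A minimal helper, assuming only that convention (it is not a PaddleX or GroundingDINO API), for turning such a prompt into a list of phrases:

```python
# Hypothetical helper: split a "phrase . phrase . phrase ." style prompt,
# as used in the example above, into a clean list of phrases.
def split_prompt(prompt):
    return [phrase.strip() for phrase in prompt.split(".") if phrase.strip()]

print(split_prompt("bus . walking man . rearview mirror ."))
# ['bus', 'walking man', 'rearview mirror']
```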
+ 8 - 8
docs/module_usage/tutorials/cv_modules/open_vocabulary_segmentation.md

@@ -23,7 +23,7 @@ comments: true
 <td>144.9</td>
 <td>33920.7</td>
 <td>2433.7</td>
-<td rowspan="2">SAM (Segment Anything Model) is an advanced image segmentation model that can segment any object in an image from simple user-provided prompts (such as points, boxes, or text). Trained on the SA-1B dataset, with ten million images and 1.1 billion mask annotations, it performs well in most scenarios.</td>
+<td rowspan="2">SAM (Segment Anything Model) is an advanced image segmentation model that can segment any object in an image from simple user-provided prompts (such as points, boxes, or text). Trained on the SA-1B dataset, with ten million images and 1.1 billion mask annotations, it performs well in most scenarios. SAM-H_box takes a box as the segmentation prompt input, and SAM segments the subject enclosed by the box; SAM-H_point takes a point as the segmentation prompt input, and SAM segments the subject the point lies on.</td>
 </tr>
 <tr>
 <td>SAM-H_point</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/SAM-H_point_infer.tar">推理模型</a></td>
@@ -43,16 +43,17 @@ comments: true
 
 ```python
 from paddlex import create_model
-model = create_model('SAM-H_box')
+model = create_model(model_name='SAM-H_box')
 results = model.predict(
-    "open_vocabulary_segmentation.jpg",
-    prompts = {
+    input="open_vocabulary_segmentation.jpg",
+    prompts={
         "box_prompt": [
             [112.9239273071289,118.38755798339844,513.7587890625,382.0570068359375],
             [4.597158432006836,263.5540771484375,92.20092010498047,336.5640869140625],
             [592.3548583984375,260.8838806152344,607.1813354492188,294.2261962890625]
         ],
-    }
+    },
+    batch_size=1
 )
 for res in results:
     res.print()
@@ -62,7 +63,7 @@ for res in results:
 
 After running, the result obtained is:
 ```bash
-{'res': "{'input_path': '000000004505.jpg', 'prompts': {'box_prompt': [[112.9239273071289, 118.38755798339844, 513.7587890625, 382.0570068359375], [4.597158432006836, 263.5540771484375, 92.20092010498047, 336.5640869140625], [592.3548583984375, 260.8838806152344, 607.1813354492188, 294.2261962890625]]}, 'masks': '...', 'mask_infos': [{'label': 'box_prompt', 'prompt': [112.9239273071289, 118.38755798339844, 513.7587890625, 382.0570068359375]}, {'label': 'box_prompt', 'prompt': [4.597158432006836, 263.5540771484375, 92.20092010498047, 336.5640869140625]}, {'label': 'box_prompt', 'prompt': [592.3548583984375, 260.8838806152344, 607.1813354492188, 294.2261962890625]}]}"}
+{'res': "{'input_path': 'open_vocabulary_segmentation.jpg', 'prompts': {'box_prompt': [[112.9239273071289, 118.38755798339844, 513.7587890625, 382.0570068359375], [4.597158432006836, 263.5540771484375, 92.20092010498047, 336.5640869140625], [592.3548583984375, 260.8838806152344, 607.1813354492188, 294.2261962890625]]}, 'masks': '...', 'mask_infos': [{'label': 'box_prompt', 'prompt': [112.9239273071289, 118.38755798339844, 513.7587890625, 382.0570068359375]}, {'label': 'box_prompt', 'prompt': [4.597158432006836, 263.5540771484375, 92.20092010498047, 336.5640869140625]}, {'label': 'box_prompt', 'prompt': [592.3548583984375, 260.8838806152344, 607.1813354492188, 294.2261962890625]}]}"}
 ```
 The meanings of the result parameters are as follows:
 - `input_path`: the path of the input image to be predicted
@@ -74,8 +75,7 @@ for res in results:
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/open_vocabulary_segmentation/open_vocabulary_segmentation_res.jpg">
-
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/open_vocabulary_segmentation/open_vocabulary_segmentation_res.jpg">
 
 Descriptions of the relevant methods and parameters are as follows:
 

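Each `box_prompt` entry in the example above is an `[xmin, ymin, xmax, ymax]` box. A small sketch, assuming only that layout (it is not a PaddleX API), for sanity-checking a prompts dict before passing it to the model:

```python
# Hypothetical validation sketch for a prompts dict of the shape used above:
# {"box_prompt": [[xmin, ymin, xmax, ymax], ...]}.
def validate_box_prompts(prompts):
    """Raise on degenerate boxes; return the number of box prompts."""
    boxes = prompts.get("box_prompt", [])
    for box in boxes:
        xmin, ymin, xmax, ymax = box
        if not (xmin < xmax and ymin < ymax):
            raise ValueError(f"degenerate box prompt: {box}")
    return len(boxes)

prompts = {"box_prompt": [[112.9, 118.4, 513.8, 382.1], [4.6, 263.6, 92.2, 336.6]]}
print(validate_box_prompts(prompts))  # 2
```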
+ 13 - 14
docs/module_usage/tutorials/cv_modules/rotated_object_detection.en.md

@@ -19,7 +19,7 @@ Rotated object detection is a derivative of the object detection module, specifi
 <th>Introduction</th>
 </tr>
 <tr>
-<td>PP-YOLOE-R_L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/PP-YOLOE-R_L_infer.tar">Inference Model</a>/<a href="https://paddledet.bj.bcebos.com/models/ppyoloe_r_crn_l_3x_dota.pdparams">Training Model</a></td>
+<td>PP-YOLOE-R-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/PP-YOLOE-R-L_infer.tar">Inference Model</a>/<a href="https://paddledet.bj.bcebos.com/models/ppyoloe_r_crn_l_3x_dota.pdparams">Training Model</a></td>
 <td>78.14</td>
 <td>20.7039</td>
 <td>157.942</td>
@@ -38,7 +38,7 @@ After completing the installation of the wheel package, a few lines of code can
 
 ```python
 from paddlex import create_model
-model = create_model("PP-YOLOE-R_L")
+model = create_model("PP-YOLOE-R-L")
 output = model.predict("rotated_object_detection_001.png", batch_size=1)
 for res in output:
     res.print(json_format=False)
@@ -76,7 +76,7 @@ After decompression, the dataset directory structure is as follows:
 A single command can complete data verification:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
@@ -134,7 +134,7 @@ After executing the above command, PaddleX will verify the dataset and count the
 <li><code>attributes.val_sample_paths</code>:The relative path list of visualized validation set sample images in this dataset;</li>
 </ul>
 <p>Additionally, the dataset verification also analyzes the distribution of sample quantities for all categories in the dataset and draws a distribution histogram (histogram.png):</p>
-<p><img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/robj_det/01.png"></p></details>
+<p><img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/rotated_object_detection/01.png"></p></details>
 
 #### 4.1.3 Dataset Format Conversion/Dataset Splitting (Optional)
 After completing the data verification, you can convert the dataset format or re-split the training/validation ratio of the dataset by modifying the configuration file or adding hyperparameters.
@@ -165,13 +165,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 </code></pre>
 <p>After the dataset splitting is executed, the original annotation files will be renamed to <code>xxx.bak</code>.</p>
 <p>The above parameters also support setting through adding command line parameters:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data \
     -o CheckDataset.split.enable=True \
@@ -180,16 +180,16 @@ CheckDataset:
 </code></pre></details>
 
 ### 4.2 Model Training
-A single command can complete model training, taking the training of the rotated object detection model `PP-YOLOE-R_L` as an example:
+A single command can complete model training, taking the training of the rotated object detection model `PP-YOLOE-R-L` as an example:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
 The following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here it is `PP-YOLOE-R_L.yaml`. When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.en.md))
+* Specify the path of the model's `.yaml` configuration file (here it is `PP-YOLOE-R-L.yaml`. When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.en.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the fields under Global and Train in the `.yaml` configuration file, or by adding parameters in the command line. For example, specify the first 2 GPU cards for training: `-o Global.device=gpu:0,1`; set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and detailed explanations, please refer to the configuration file instructions for the corresponding task module [PaddleX Common Model Configuration File Parameter Instructions.](../../instructions/config_parameters_common.en.md).
@@ -214,13 +214,13 @@ Other related parameters can be set by modifying the fields under Global and Tra
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. Using PaddleX for model evaluation can be done with a single command:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
 Similar to model training, the following steps are required:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-R_L.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-R-L.yaml`)
 * Specify the mode as model evaluation: `-o Global.mode=evaluate`
 * Specify the path to the validation dataset: `-o Global.dataset_dir`. Other related parameters can be set by modifying the `Global` and `Evaluate` fields in the `.yaml` configuration file. For details, refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common.en.md).
 
@@ -236,14 +236,14 @@ After completing model training and evaluation, you can use the trained model we
 
 * To perform inference predictions through the command line, use the following command. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/rotated_object_detection_001.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml  \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="rotated_object_detection_001.png"
 ```
 Similar to model training and evaluation, the following steps are required:
 
-* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-R_L.yaml`)
+* Specify the `.yaml` configuration file path for the model (here it is `PP-YOLOE-R-L.yaml`)
 * Specify the mode as model inference prediction: `-o Global.mode=predict`
 * Specify the model weights path: `-o Predict.model_dir="./output/best_model/inference"`
 * Specify the input data path: `-o Predict.input="..."`
@@ -255,4 +255,3 @@ The model can be directly integrated into the PaddleX pipelines or directly into
 2.<b>Module Integration</b>
 
 The weights you produce can be directly integrated into the object detection module. Refer to the Python example code in [Quick Integration](#iii-quick-integration), and simply replace the model with the path to your trained model.
-

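The `-o Global.mode=...` overrides used in the commands above all follow a `Section.key=value` convention. A minimal sketch of parsing such overrides into a nested dict, assuming nothing about PaddleX's actual config machinery:

```python
# Hypothetical sketch (not PaddleX's real parser): turn "-o" style overrides
# such as "Global.mode=train" into a nested configuration dict.
def parse_overrides(pairs):
    config = {}
    for pair in pairs:
        dotted_key, value = pair.split("=", 1)  # value may itself contain "="
        node = config
        *parents, leaf = dotted_key.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return config

cfg = parse_overrides(["Global.mode=train", "Train.epochs_iters=10"])
print(cfg)  # {'Global': {'mode': 'train'}, 'Train': {'epochs_iters': '10'}}
```

Note the sketch keeps every value as a string; a real config loader would also coerce types from the yaml schema.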
+ 21 - 23
docs/module_usage/tutorials/cv_modules/rotated_object_detection.md

@@ -19,7 +19,7 @@ comments: true
 <th>Introduction</th>
 </tr>
 <tr>
-<td>PP-YOLOE-R_L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/PP-YOLOE-R_L_infer.tar">Inference Model</a>/<a href="https://paddledet.bj.bcebos.com/models/ppyoloe_r_crn_l_3x_dota.pdparams">Training Model</a></td>
+<td>PP-YOLOE-R-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/PP-YOLOE-R-L_infer.tar">Inference Model</a>/<a href="https://paddledet.bj.bcebos.com/models/ppyoloe_r_crn_l_3x_dota.pdparams">Training Model</a></td>
 <td>78.14</td>
 <td>20.7039</td>
 <td>157.942</td>
@@ -35,12 +35,10 @@ comments: true
 > ❗ Before quick integration, please install the PaddleX wheel package first; for details, refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.md)
 
 After installing the wheel package, a few lines of code complete inference for the rotated object detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/rotated_object_detection_001.png) to your local machine.
-
 ```python
 from paddlex import create_model
-model_name = "PP-YOLOE-R_L"
-model = create_model(model_name, img_size = 1024)
-output = model.predict("rotated_object_detection_001.png", batch_size=1, threshold=0.5)
+model = create_model(model_name="PP-YOLOE-R-L")
+output = model.predict(input="rotated_object_detection_001.png", batch_size=1)
 for res in output:
     res.print()
     res.save_to_img("./output/")
@@ -61,12 +59,12 @@ for res in output:
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/robj_det/rotated_object_detection_001_res.png">
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/rotated_object_detection/rotated_object_detection_001_res.png">
 
 
 Descriptions of the relevant methods and parameters are as follows:
 
-* `create_model` instantiates the rotated object detection model (here using `PP-YOLOE-R_L` as an example), described as follows:
+* `create_model` instantiates the rotated object detection model (here using `PP-YOLOE-R-L` as an example), described as follows:
 <table>
 <thead>
 <tr>
@@ -94,7 +92,7 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float/None/dict</code></td>
+<td><code>float/None/dict[int, float]</code></td>
 <td>None</td>
 <td>None</td>
 </tr>
@@ -111,7 +109,7 @@ for res in output:
 
 * `threshold` is the low-score object filtering threshold. It defaults to None, meaning the upper-layer setting is used; the priority of the settings, from high to low, is: `passed to predict > passed at create_model initialization > set in the yaml config file`. Two threshold formats are currently supported, float and dict:
   * `float`: use the same threshold for all classes.
-  * `dict`: the key is the class ID and the value is the threshold, so different classes use different thresholds.
+  * `dict[int, float]`: the key is the class ID and the value is the threshold, so different classes use different thresholds.
 
 * `img_size` is the resolution the model actually uses for prediction. It defaults to None, meaning the upper-layer setting is used; the priority of the settings, from high to low, is: `create_model initialization > set in the yaml config file`.
 
@@ -152,12 +150,12 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float</code>/<code>dict</code>/<code>None</code></td>
+<td><code>float</code>/<code>dict[int, float]</code>/<code>None</code></td>
 <td>
 <ul>
   <li><b>None</b>: use the upper-layer setting; the priority of the settings, from high to low, is: <code>passed to predict > passed at create_model initialization > set in the yaml config file</code></li>
-  <li><b>float</b>: e.g. 0.5, meaning <code>0.5</code> is used as the low-score object filtering threshold for all classes during inference</li>
-  <li><b>dict</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference.</li>
+  <li><b>float</b>: use the same threshold for all classes; e.g. 0.5 means 0.5 is used as the low-score object filtering threshold for all classes during inference</li>
+  <li><b>dict[int, float]</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference.</li>
 </ul>
 </td>
 <td>None</td>
@@ -278,7 +276,7 @@ tar -xf ./dataset/rdet_dota_examples.tar -C ./dataset/
 A single command completes data validation:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
@@ -336,7 +334,7 @@ python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
 <li><code>attributes.val_sample_paths</code>: the list of relative paths to the visualized validation-set sample images of this dataset;</li>
 </ul>
 <p>In addition, dataset validation analyzes the sample count distribution across all classes in the dataset and draws a distribution histogram (histogram.png):</p>
-<p><img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/robj_det/01.png"></p></details>
+<p><img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/rotated_object_detection/01.png"></p></details>
 
 #### 4.1.3 Dataset Format Conversion / Dataset Splitting (Optional)
 After completing data validation, you can convert the dataset format or re-split the training/validation ratio of the dataset by <b>modifying the configuration file</b> or <b>appending hyperparameters</b>.
@@ -367,13 +365,13 @@ CheckDataset:
   ......
 </code></pre>
 <p>Then execute the command:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 </code></pre>
 <p>After dataset splitting is executed, the original annotation files are renamed to <code>xxx.bak</code> in their original paths.</p>
 <p>The above parameters can likewise be set by appending command-line parameters:</p>
-<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+<pre><code class="language-bash">python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data \
     -o CheckDataset.split.enable=True \
@@ -382,16 +380,16 @@ CheckDataset:
 </code></pre></details>
 
 ### 4.2 Model Training
-A single command completes model training; take the training of the rotated object detection model `PP-YOLOE-R_L` here as an example:
+A single command completes model training; take the training of the rotated object detection model `PP-YOLOE-R-L` here as an example:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
 The following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here `PP-YOLOE-R_L.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
+* Specify the path of the model's `.yaml` configuration file (here `PP-YOLOE-R-L.yaml`; when training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md))
 * Specify the mode as model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the fields under `Global` and `Train` in the `.yaml` configuration file, or adjusted by appending command-line parameters. For example, to train on the first 2 GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and detailed explanations, refer to the configuration file description of the corresponding task module, [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common.md).
@@ -416,13 +414,13 @@ python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
 After completing model training, you can evaluate the specified model weights file on the validation set to verify the model's accuracy. With PaddleX, model evaluation takes a single command:
 
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
 ```
 Similar to model training, the following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here `PP-YOLOE-R_L.yaml`)
+* Specify the path of the model's `.yaml` configuration file (here `PP-YOLOE-R-L.yaml`)
 * Specify the mode as model evaluation: `-o Global.mode=evaluate`
 * Specify the validation dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file; for details, refer to the [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common.md).
@@ -439,14 +437,14 @@ python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml \
 
 * To run inference from the command line, only the following single command is needed. Before running the code below, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/rotated_object_detection_001.png) to your local machine.
 ```bash
-python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R_L.yaml  \
+python main.py -c paddlex/configs/rotated_object_detection/PP-YOLOE-R-L.yaml  \
     -o Global.mode=predict \
     -o Predict.model_dir="./output/best_model/inference" \
     -o Predict.input="rotated_object_detection_001.png"
 ```
 Similar to model training and evaluation, the following steps are required:
 
-* Specify the path of the model's `.yaml` configuration file (here `PP-YOLOE-R_L.yaml`)
+* Specify the path of the model's `.yaml` configuration file (here `PP-YOLOE-R-L.yaml`)
 * Specify the mode as model inference prediction: `-o Global.mode=predict`
 * Specify the model weights path: `-o Predict.model_dir="./output/best_model/inference"`
 * Specify the input data path: `-o Predict.input="..."`

+ 3 - 3
docs/module_usage/tutorials/cv_modules/semantic_segmentation.md

@@ -203,8 +203,8 @@ comments: true
 
 ```python
 from paddlex import create_model
-model = create_model("PP-LiteSeg-T")
-output = model.predict("general_semantic_segmentation_002.png", batch_size=1)
+model = create_model(model_name="PP-LiteSeg-T")
+output = model.predict(input="general_semantic_segmentation_002.png", batch_size=1)
 for res in output:
     res.print()
     res.save_to_img("./output/")
@@ -221,7 +221,7 @@ for res in output:
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/semantic_segmentation/general_semantic_segmentation_002_res.png">
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/semanticseg/general_semantic_segmentation_002_res.png">
 
 Descriptions of the relevant methods and parameters are as follows:
 

+ 8 - 10
docs/module_usage/tutorials/cv_modules/small_object_detection.md

@@ -57,9 +57,8 @@ comments: true
 
 ```python
 from paddlex import create_model
-model_name = "PP-YOLOE_plus_SOD-S"
-model = create_model(model_name)
-output = model.predict("small_object_detection.jpg", batch_size=1)
+model = create_model(model_name="PP-YOLOE_plus_SOD-S")
+output = model.predict(input="small_object_detection.jpg", batch_size=1)
 for res in output:
     res.print()
     res.save_to_img("./output/")
@@ -80,8 +79,7 @@ for res in output:
 
 The visualized image is as follows:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/small_obj_det/small_object_detection_res.jpg">
-
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/smallobj_det/small_object_detection_res.jpg">
 
 Descriptions of the relevant methods and parameters are as follows:
 
@@ -113,7 +111,7 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float/None/dict</code></td>
+<td><code>float/None/dict[int, float]</code></td>
 <td>None</td>
 <td>None</td>
 </tr>
@@ -123,7 +121,7 @@ for res in output:
 
 * `threshold` is the low-score object filtering threshold. It defaults to None, meaning the upper-layer setting is used; the priority of the settings, from high to low, is: `passed to predict > passed at create_model initialization > set in the yaml config file`. Two threshold formats are currently supported, float and dict:
   * `float`: use the same threshold for all classes.
-  * `dict`: the key is the class ID and the value is the threshold, so different classes use different thresholds.
+  * `dict[int, float]`: the key is the class ID and the value is the threshold, so different classes use different thresholds.
 
 * Call the `predict()` method of the small object detection model to perform inference. The `predict()` method takes the parameters `input`, `batch_size`, and `threshold`, described as follows:
 
@@ -162,12 +160,12 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float</code>/<code>dict</code>/<code>None</code></td>
+<td><code>float</code>/<code>dict[int, float]</code>/<code>None</code></td>
 <td>
 <ul>
   <li><b>None</b>: use the upper-layer setting; the priority of the settings, from high to low, is: <code>passed to predict > passed at create_model initialization > set in the yaml config file</code></li>
-  <li><b>float</b>: e.g. 0.5, meaning <code>0.5</code> is used as the low-score object filtering threshold for all classes during inference</li>
-  <li><b>dict</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference.</li>
+  <li><b>float</b>: use the same threshold for all classes; e.g. 0.5 means 0.5 is used as the low-score object filtering threshold for all classes during inference</li>
+  <li><b>dict[int, float]</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning class 0 uses a low-score filtering threshold of 0.5 and class 1 uses 0.35 during inference.</li>
 </ul>
 </td>
 <td>None</td>

+ 8 - 10
docs/module_usage/tutorials/cv_modules/vehicle_detection.md

@@ -45,9 +45,8 @@ comments: true
 
 ```python
 from paddlex import create_model
-model_name = "PP-YOLOE-S_vehicle"
-model = create_model(model_name)
-output = model.predict("vehicle_detection.jpg", batch_size=1)
+model = create_model(model_name="PP-YOLOE-S_vehicle")
+output = model.predict(input="vehicle_detection.jpg", batch_size=1)
 for res in output:
     res.print()
     res.save_to_img("./output/")
@@ -68,8 +67,7 @@ for res in output:
 
 The visualized image is shown below:
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/vehicle_detection/vehicle_detection_res.jpg">
-
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/vehicle_det/vehicle_detection_res.jpg">
 
 Descriptions of the relevant methods and parameters are as follows:
 
@@ -101,7 +99,7 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float/None/dict</code></td>
+<td><code>float/None/dict[int, float]</code></td>
 <td>None</td>
 <td>None</td>
 </tr>
@@ -111,7 +109,7 @@ for res in output:
 
 * `threshold` is the low-score object filtering threshold. It defaults to None, meaning the setting from the level above is used; the priority of the settings, from highest to lowest, is: `passed to predict > passed to create_model at initialization > set in the yaml config file`. Two ways of setting the threshold are currently supported, float and dict:
   * `float`: the same threshold is applied to all classes.
-  * `dict`: keys are class IDs and values are thresholds, so different thresholds can be applied to different classes. Vehicle detection is single-class detection, so this setting is not needed.
+  * `dict[int, float]`: keys are class IDs and values are thresholds, so different thresholds can be applied to different classes. Vehicle detection is single-class detection, so this setting is not needed.
 
 * Call the vehicle detection model's `predict()` method to run inference. The `predict()` method accepts the parameters `input`, `batch_size`, and `threshold`, described as follows:
 
@@ -150,12 +148,12 @@ for res in output:
 <tr>
 <td><code>threshold</code></td>
 <td>Low-score object filtering threshold</td>
-<td><code>float</code>/<code>dict</code>/<code>None</code></td>
+<td><code>float</code>/<code>dict[int, float]</code>/<code>None</code></td>
 <td>
 <ul>
   <li><b>None</b>: use the setting from the level above; the priority of the settings, from highest to lowest, is: <code>passed to predict > passed to create_model at initialization > set in the yaml config file</code></li>
-  <li><b>float</b>: e.g. 0.5, meaning <code>0.5</code> is used as the low-score object filtering threshold</li>
-  <li><b>dict</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning a filtering threshold of 0.5 is used for class 0 and 0.35 for class 1 during inference. Vehicle detection is single-class detection, so this setting is not needed.</li>
+  <li><b>float</b>: the same threshold is applied to all classes. E.g. 0.5 means 0.5 is used as the low-score object filtering threshold for all classes during inference</li>
+  <li><b>dict[int, float]</b>: e.g. <code>{0: 0.5, 1: 0.35}</code>, meaning a filtering threshold of 0.5 is used for class 0 and 0.35 for class 1 during inference. Vehicle detection is single-class detection, so this setting is not needed.</li>
 </ul>
 </td>
 <td>None</td>

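The float vs `dict[int, float]` threshold semantics documented in the hunks above can be illustrated with a small filtering sketch. This is a hypothetical helper for illustration, not PaddleX's internal post-processing:

```python
def filter_low_score(boxes, threshold):
    """Keep only detections whose score meets the threshold.

    `boxes` is a list of (class_id, score) pairs. `threshold` is either a
    single float applied to every class, or a dict mapping class IDs to
    per-class thresholds (classes missing from the dict are kept).
    """
    kept = []
    for class_id, score in boxes:
        if isinstance(threshold, dict):
            limit = threshold.get(class_id, 0.0)
        else:
            limit = threshold
        if score >= limit:
            kept.append((class_id, score))
    return kept

detections = [(0, 0.6), (0, 0.4), (1, 0.4), (1, 0.3)]
print(filter_low_score(detections, 0.5))                # → [(0, 0.6)]
print(filter_low_score(detections, {0: 0.5, 1: 0.35}))  # → [(0, 0.6), (1, 0.4)]
```

The float form sets one bar for everything; the dict form lets a noisy class keep a stricter bar while a reliable class uses a looser one.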
+ 3 - 3
docs/practical_tutorials/small_object_detection_tutorial.en.md

@@ -30,7 +30,7 @@ PaddleX provides the following quick experience methods, which can be directly e
   The quick experience produces the following inference result example:
   <center>
 
-  <img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/small_obj_det/01.png" width=600>
+  <img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/smallobj_det/02.png" width=600>
 
   </center>
 
@@ -154,7 +154,7 @@ In the above validation result, `check_pass` being `True` indicates that the dat
 In addition, the dataset validation also analyzed the sample number distribution of all categories in the dataset and plotted the distribution histogram (`histogram.png`):
 <center>
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/small_obj_det/02.png" width=600>
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/smallobj_det/03.png" width=600>
 
 </center>
 
@@ -276,7 +276,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-large
 Through the above, the prediction results can be generated under `./output`, as follows:
 <center>
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/small_obj_det/03.png" width="600"/>
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/smallobj_det/04.png" width="600"/>
 
 </center>
 

+ 3 - 3
docs/practical_tutorials/small_object_detection_tutorial.md

@@ -35,7 +35,7 @@ PaddleX 提供了以下快速体验的方式,可以直接通过 PaddleX wheel
   Example inference result produced by the quick experience:
   <center>
 
-  <img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/small_obj_det/01.png" width=600>
+  <img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/smallobj_det/02.png" width=600>
 
   </center>
 
@@ -160,7 +160,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-large
 In addition, dataset validation also analyzed the sample count distribution across all classes in the dataset and plotted a distribution histogram (histogram.png):
 <center>
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/small_obj_det/02.png" width=600>
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/smallobj_det/03.png" width=600>
 
 </center>
 
@@ -283,7 +283,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-large
 The above generates prediction results under `./output`, as follows:
 <center>
 
-<img src="https://raw.githubusercontent.com/BluebirdStory/PaddleX_doc_images/main/images/modules/small_obj_det/03.png" width="600"/>
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/smallobj_det/04.png" width="600"/>
 
 </center>
 

+ 1 - 1
paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R_L.yaml → paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml

@@ -1,5 +1,5 @@
 Global:
-  model: PP-YOLOE-R_L
+  model: PP-YOLOE-R-L
   mode: check_dataset # check_dataset/train/evaluate/predict
   dataset_dir: "dataset/rdet_dota_examples"
   device: gpu:0,1,2,3

+ 1 - 1
paddlex/configs/pipelines/rotated_object_detection.yaml

@@ -4,7 +4,7 @@ pipeline_name: rotated_object_detection
 SubModules:
   RotatedObjectDetection:
     module_name: rotated_object_detection
-    model_name: PP-YOLOE-R_L
+    model_name: PP-YOLOE-R-L
     model_dir: null
     batch_size: 1
     threshold: 0.5

+ 1 - 1
paddlex/inference/utils/official_models.py

@@ -303,7 +303,7 @@ PP-LCNet_x1_0_vehicle_attribute_infer.tar",
     "PP-YOLOE_plus-S_face": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE_plus-S_face_infer.tar",
     "MobileFaceNet": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/MobileFaceNet_infer.tar",
     "ResNet50_face": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/ResNet50_face_infer.tar",
-    "PP-YOLOE-R_L": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE-R_L_infer.tar",
+    "PP-YOLOE-R-L": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE-R-L_infer.tar",
     "Co-Deformable-DETR-R50": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/Co-Deformable-DETR-R50_infer.tar",
     "Co-Deformable-DETR-Swin-T": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/Co-Deformable-DETR-Swin-T_infer.tar",
     "Co-DINO-R50": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/Co-DINO-R50_infer.tar",

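The rename in this hunk has to change the registry key and the tarball basename together, since the visible entries follow a `<model_name>_infer.tar` convention. A quick consistency check over entries like those in the hunk might look like this (an illustrative sketch using only URLs visible in the diff; `check_consistency` is a made-up helper, not part of PaddleX):

```python
# A few entries copied from the official_models.py hunk above.
OFFICIAL_MODELS = {
    "PP-YOLOE-R-L": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE-R-L_infer.tar",
    "Co-Deformable-DETR-R50": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/Co-Deformable-DETR-R50_infer.tar",
}

def check_consistency(models):
    """Return the model names whose URL basename does not follow the
    `<model_name>_infer.tar` convention."""
    bad = []
    for name, url in models.items():
        if not url.endswith(f"/{name}_infer.tar"):
            bad.append(name)
    return bad

print(check_consistency(OFFICIAL_MODELS))  # → []
```

Renaming only the key (as the pre-rename `PP-YOLOE-R_L` entry would after this commit) would be flagged by such a check.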
+ 1 - 1
paddlex/modules/object_detection/model_list.py

@@ -71,7 +71,7 @@ MODELS = [
     "BlazeFace",
     "BlazeFace-FPN-SSH",
     "PP-YOLOE_plus-S_face",
-    "PP-YOLOE-R_L",
+    "PP-YOLOE-R-L",
     "Co-Deformable-DETR-R50",
     "Co-Deformable-DETR-Swin-T",
     "Co-DINO-R50",

+ 2 - 2
paddlex/repo_apis/PaddleDetection_api/object_det/register.py

@@ -925,9 +925,9 @@ register_model_info(
 
 register_model_info(
     {
-        "model_name": "PP-YOLOE-R_L",
+        "model_name": "PP-YOLOE-R-L",
         "suite": "Det",
-        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE-R_L.yaml"),
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE-R-L.yaml"),
         "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
         "supported_dataset_types": ["COCODetDataset"],
         "supported_train_opts": {