
prepare for 3.0b2 (#2432)

* prepare for 3.0b2

* precommit all files
cuicheng01 1 year ago
parent
commit
cae6039e6e
54 changed files with 464 additions and 393 deletions
  1. .github/ISSUE_TEMPLATE/1_data.md (+4 -4)
  2. .github/ISSUE_TEMPLATE/2--paddlex-api----.md (+5 -5)
  3. .github/ISSUE_TEMPLATE/3_deploy.md (+19 -15)
  4. .github/ISSUE_TEMPLATE/4_gui.md (+8 -12)
  5. .github/ISSUE_TEMPLATE/5_other.md (+5 -5)
  6. .github/ISSUE_TEMPLATE/6_hardware_contribute.md (+14 -2)
  7. .github/pull_request_template.md (+0 -13)
  8. README.md (+2 -2)
  9. README_en.md (+1 -1)
  10. docs/installation/installation.en.md (+1 -1)
  11. docs/installation/installation.md (+1 -1)
  12. paddlex/.version (+1 -1)
  13. paddlex/__main__.py (+0 -2)
  14. paddlex/inference/models/object_detection.py (+1 -1)
  15. paddlex/inference/pipelines/ppchatocrv3/ppchatocrv3.py (+7 -2)
  16. paddlex/inference/utils/official_models.py (+3 -3)
  17. paddlex/modules/anomaly_detection/dataset_checker/dataset_src/check_dataset.py (+3 -1)
  18. paddlex/modules/anomaly_detection/dataset_checker/dataset_src/convert_dataset.py (+34 -14)
  19. paddlex/modules/anomaly_detection/model_list.py (+1 -3)
  20. paddlex/modules/face_recognition/dataset_checker/dataset_src/check_dataset.py (+1 -0)
  21. paddlex/modules/face_recognition/model_list.py (+1 -4)
  22. paddlex/modules/face_recognition/trainer.py (+4 -2)
  23. paddlex/modules/general_recognition/dataset_checker/__init__.py (+1 -1)
  24. paddlex/modules/general_recognition/dataset_checker/dataset_src/analyse_dataset.py (+16 -10)
  25. paddlex/modules/general_recognition/dataset_checker/dataset_src/convert_dataset.py (+4 -3)
  26. paddlex/modules/general_recognition/dataset_checker/dataset_src/utils/visualizer.py (+1 -3)
  27. paddlex/modules/general_recognition/evaluator.py (+5 -3)
  28. paddlex/modules/general_recognition/exportor.py (+1 -1)
  29. paddlex/modules/general_recognition/model_list.py (+3 -4)
  30. paddlex/modules/general_recognition/trainer.py (+3 -1)
  31. paddlex/modules/image_classification/dataset_checker/dataset_src/convert_dataset.py (+2 -2)
  32. paddlex/modules/image_unwarping/model_list.py (+0 -1)
  33. paddlex/modules/object_detection/dataset_checker/dataset_src/utils/visualizer.py (+12 -7)
  34. paddlex/modules/object_detection/model_list.py (+1 -1)
  35. paddlex/repo_apis/PaddleClas_api/cls/__init__.py (+0 -1)
  36. paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_conv_large.yaml (+1 -1)
  37. paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_conv_medium.yaml (+1 -1)
  38. paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_conv_small.yaml (+1 -1)
  39. paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_hybrid_large.yaml (+1 -1)
  40. paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_hybrid_medium.yaml (+1 -1)
  41. paddlex/repo_apis/PaddleClas_api/configs/PP-LCNet_x1_0_doc_ori.yaml (+1 -1)
  42. paddlex/repo_apis/PaddleClas_api/shitu_rec/config.py (+2 -6)
  43. paddlex/repo_apis/PaddleClas_api/shitu_rec/register.py (+3 -9)
  44. paddlex/repo_apis/PaddleClas_api/shitu_rec/runner.py (+1 -0)
  45. paddlex/repo_apis/PaddleDetection_api/configs/CenterNet-ResNet50.yaml (+1 -1)
  46. paddlex/repo_apis/PaddleDetection_api/configs/FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml (+1 -1)
  47. paddlex/repo_apis/PaddleDetection_api/configs/PicoDet-S_layout_3cls.yaml (+1 -1)
  48. paddlex/repo_apis/PaddleDetection_api/object_det/config.py (+9 -5)
  49. paddlex/repo_apis/PaddleDetection_api/object_det/official_categories.py (+14 -0)
  50. paddlex/repo_apis/PaddleDetection_api/object_det/register.py (+231 -227)
  51. paddlex/repo_apis/PaddleSeg_api/configs/STFPM.yaml (+0 -1)
  52. paddlex/repo_apis/__init__.py (+13 -0)
  53. paddlex/repo_manager/meta.py (+4 -4)
  54. paddlex/utils/__init__.py (+13 -0)

+ 4 - 4
.github/ISSUE_TEMPLATE/1_data.md

@@ -9,10 +9,10 @@ assignees: ''
 
 ## Checklist:
 
-1. Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
-2. Read the [FAQ (common questions and answers)](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md)
-3. Confirm whether the bug is still unfixed in the latest version
-4. Read the [PaddleX data preparation docs](https://github.com/PaddlePaddle/PaddleX/tree/develop#2-%E6%95%B0%E6%8D%AE%E5%87%86%E5%A4%87)
+- [ ] Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
+- [ ] Read the [FAQ](https://paddlepaddle.github.io/PaddleX/main/FAQ.html)
+- [ ] Read the [PaddleX docs](https://paddlepaddle.github.io/PaddleX/main/index.html)
+- [ ] Confirm whether the bug is still unfixed in the latest version
 
 ## Describe the problem
 

+ 5 - 5
.github/ISSUE_TEMPLATE/2--paddlex-api----.md

@@ -9,16 +9,16 @@ assignees: ''
 
 ## Checklist:
 
-1. Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
-2. Read the [FAQ (common questions and answers)](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md)
-3. Confirm whether the bug is still unfixed in the latest version
-4. Read the [PaddleX API docs](https://github.com/PaddlePaddle/PaddleX/tree/develop#paddlex-%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3)
+- [ ] Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
+- [ ] Read the [FAQ](https://paddlepaddle.github.io/PaddleX/main/FAQ.html)
+- [ ] Read the [PaddleX docs](https://paddlepaddle.github.io/PaddleX/main/index.html)
+- [ ] Confirm whether the bug is still unfixed in the latest version
 
 ## Describe the problem
 
 ## To reproduce
 
-1. Have you successfully run the [tutorials](https://github.com/PaddlePaddle/PaddleX/tree/develop/tutorials) we provide?
+1. Have you successfully run the [tutorials](https://paddlepaddle.github.io/PaddleX/main/index.html) we provide?
 
 2. Did you modify the code on top of the tutorial? If so, please provide the code you ran
 

+ 19 - 15
.github/ISSUE_TEMPLATE/3_deploy.md

@@ -1,6 +1,6 @@
 ---
 name: 3. Model deployment
-about: Model deployment questions, including C++, Python, C# deployment, etc.
+about: Model deployment questions, including high-performance inference, serving deployment, edge deployment, etc.
 title: ''
 labels: ''
 assignees: ''
@@ -9,28 +9,34 @@ assignees: ''
 
 ## Checklist:
 
-1. Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
-2. Read the [FAQ (common questions and answers)](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md)
-3. Confirm whether the bug is still unfixed in the latest version
-4. Read the [PaddleX deployment docs](https://github.com/PaddlePaddle/PaddleX/tree/develop#5-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
+- [ ] Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
+- [ ] Read the [FAQ](https://paddlepaddle.github.io/PaddleX/main/FAQ.html)
+- [ ] Read the [PaddleX docs](https://paddlepaddle.github.io/PaddleX/main/index.html)
+- [ ] Confirm whether the bug is still unfixed in the latest version
 
 ## Describe the problem
 
 ## To reproduce
 
-1. C++ deployment
+1. High-performance inference
 
-    * Did you run the [demo](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/demo) we provide successfully by following the docs?
+    * Did you follow the [high-performance inference tutorial](https://paddlepaddle.github.io/PaddleX/main/pipeline_deploy/high_performance_inference.html) exactly and complete the whole workflow?
 
-    * Did you modify the code on top of the demo? If so, please provide the code you ran
+    * Are you using offline activation or online activation?
 
-2. C# deployment
+2. Serving deployment
 
-    * Did you run the [demo](https://github.com/PaddlePaddle/PaddleX/tree/develop/examples/C%23_deploy) we provide successfully by following the docs?
+    * Did you follow the [serving deployment tutorial](https://paddlepaddle.github.io/PaddleX/main/pipeline_deploy/service_deploy.html) exactly and complete the whole workflow?
 
-    * Did you modify the code on top of the demo? If so, please provide the code you ran
+    * Did you use the high-performance inference plugin in serving deployment? If so, are you using offline activation or online activation?
+
+    * For issues with multi-language clients, please provide a call example.
+
+3. Edge deployment
+    * Did you follow the [edge deployment tutorial](https://paddlepaddle.github.io/PaddleX/main/pipeline_deploy/edge_deploy.html) exactly and complete the whole workflow?
+
+    * Which edge device are you using? What are the corresponding PaddlePaddle and PaddleLite versions?
 
 
-    * If the C# demo fails to run, does the C++ [demo](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/demo) run correctly?
 3. Which **model** and **dataset** are you using?
 
@@ -38,9 +44,7 @@ assignees: ''
 
 ## Environment
 
-1. If you use Python deployment, please provide your PaddlePaddle and PaddleX version numbers and your Python version
-
-2. If you use C++ or C# deployment, please provide the PaddleX branch you used and the inference engine (e.g. PaddleInference) version number
+1. Please provide your PaddlePaddle and PaddleX version numbers and your Python version
 
 3. Please provide your operating system (Linux/Windows/MacOS)
 

+ 8 - 12
.github/ISSUE_TEMPLATE/4_gui.md

@@ -1,6 +1,6 @@
 ---
-name: 4. PaddleX GUI client usage questions
-about: PaddleX GUI client usage questions
+name: 4. Xinghe zero-code pipeline usage questions
+about: Xinghe zero-code pipeline usage questions
 title: ''
 labels: ''
 assignees: ''
@@ -9,19 +9,15 @@ assignees: ''
 
 ## Checklist:
 
-1. Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
-2. Read the [FAQ (common questions and answers)](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md)
-3. Confirm whether the bug is still unfixed in the latest version
-4. If the bug is caused by PaddleX API 2.0 and is already fixed on the develop branch, replace the built-in PaddleX API following [FAQ Q4](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md#gui%E7%9B%B8%E5%85%B3%E9%97%AE%E9%A2%98)
+- [ ] Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
+- [ ] Read the [FAQ](https://paddlepaddle.github.io/PaddleX/main/FAQ.html)
+- [ ] Read the [PaddleX docs](https://paddlepaddle.github.io/PaddleX/main/index.html)
+- [ ] For dataset validation issues, please make sure the data passes validation in open-source PaddleX
 
 ## Describe the problem
 
 ## To reproduce
 
-1. Please provide the error message and relevant logs (see [FAQ Q2](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md#gui%E7%9B%B8%E5%85%B3%E9%97%AE%E9%A2%98) for how to find the logs)
+1. Please provide the error message and relevant logs
 
-2. Please provide your GUI version number
-
-3. Please provide your operating system (Linux/Windows/MacOS)
-
-4. What CUDA/cuDNN version numbers are you using?
+2. Please provide your Xinghe uid and pipeline id

+ 5 - 5
.github/ISSUE_TEMPLATE/5_other.md

@@ -9,16 +9,16 @@ assignees: ''
 
 ## Checklist:
 
-1. Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
-2. Read the [FAQ (common questions and answers)](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md)
-3. Confirm whether the bug is still unfixed in the latest version
-4. Read the [PaddleX user docs](https://github.com/PaddlePaddle/PaddleX/tree/develop#paddlex-%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3)
+- [ ] Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
+- [ ] Read the [FAQ](https://paddlepaddle.github.io/PaddleX/main/FAQ.html)
+- [ ] Read the [PaddleX docs](https://paddlepaddle.github.io/PaddleX/main/index.html)
+- [ ] Confirm whether the bug is still unfixed in the latest version
 
 ## Describe the problem
 
 ## To reproduce
 
-1. Have you successfully run the [tutorials](https://github.com/PaddlePaddle/PaddleX/tree/develop/tutorials) we provide?
+1. Have you successfully run the [tutorials](https://paddlepaddle.github.io/PaddleX/main/index.html) we provide?
 
 2. Did you modify the code on top of the tutorial? If so, please provide the code you ran
 

+ 14 - 2
.github/ISSUE_TEMPLATE/6_hardware_contribute.md

@@ -1,6 +1,18 @@
-# Description
+---
+name: 6. New hardware contribution
+about: Please describe the model and chip you plan to contribute
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+## Checklist:
+
+- [ ] Search the [existing issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
+- [ ] Read the [FAQ](https://paddlepaddle.github.io/PaddleX/main/FAQ.html)
+- [ ] Read the [PaddleX docs](https://paddlepaddle.github.io/PaddleX/main/index.html)
 
-1. Please describe the model and chip you plan to contribute
 
 # Environment
 

+ 0 - 13
.github/pull_request_template.md

@@ -1,13 +0,0 @@
-## PR description template
-
-If you are submitting C++ deployment code, please confirm the following self-test items (tick the completed ones) and keep this list so the reviewer can see what was tested.
-
-- [ ] Tests pass on Linux
-- - [ ] batch = 1 inference
-- - [ ] batch > 1 inference
-- - [ ] multi-GPU inference
-- - [ ] TensorRT inference
-- - [ ] Triton inference
-- [ ] Tests pass on Windows
-- - [ ] batch = 1 inference
-- - [ ] batch > 1 inference

+ 2 - 2
README.md

@@ -358,7 +358,7 @@ Every PaddleX pipeline supports local **quick inference**, and some models can be used on [AI
 
 ### 🛠️ Installation
 
-> ❗Before installing PaddleX, please make sure you have a basic **Python environment** (Note: Python 3.8 to Python 3.10 are currently supported, with more Python versions being adapted).
+> ❗Before installing PaddleX, please make sure you have a basic **Python environment** (Note: Python 3.8 to Python 3.10 are currently supported, with more Python versions being adapted). PaddleX 3.0-beta2 depends on PaddlePaddle version 3.0.0b2.
 
 * **Install PaddlePaddle**
 ```bash
@@ -377,7 +377,7 @@ python -m pip install paddlepaddle-gpu==3.0.0b2 -i https://www.paddlepaddle.org.
 * **Install PaddleX**
 
 ```bash
-pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0b1-py3-none-any.whl
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0b2-py3-none-any.whl
 ```
 
 > ❗ For more installation methods, see the [PaddleX installation tutorial](https://paddlepaddle.github.io/PaddleX/latest/installation/installation.html)
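After installing, a quick sanity check can confirm the pinned versions (a minimal sketch; `paddle.__version__` is PaddlePaddle's standard version attribute, while `paddlex.__version__` is an assumption here — the `paddlex/.version` file bumped below carries the authoritative string):

```python
# Post-install sanity check. Assumption: paddlex exposes __version__ like most
# packages; if it does not, the installed paddlex/.version file has the string.
import paddle
import paddlex

print(paddle.__version__)   # expect a 3.0.0b2 / 3.0.0-beta2 string per the note above
print(paddlex.__version__)  # expect 3.0.0.beta2 per paddlex/.version below
```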

+ 1 - 1
README_en.md

@@ -355,7 +355,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
 
 ### 🛠️ Installation
 
-> ❗Before installing PaddleX, please ensure you have a basic **Python environment** (Note: Currently supports Python 3.8 to Python 3.10, with more Python versions being adapted).
+> ❗Before installing PaddleX, please ensure you have a basic **Python environment** (Note: Currently supports Python 3.8 to Python 3.10, with more Python versions being adapted). The PaddleX 3.0-beta2 version depends on PaddlePaddle version 3.0.0b2.
 
 * **Installing PaddlePaddle**
 

+ 1 - 1
docs/installation/installation.en.md

@@ -17,7 +17,7 @@ After installing PaddlePaddle (refer to the [PaddlePaddle Local Installation Tut
 > ❗ <b>Note</b>: Please ensure that PaddlePaddle is successfully installed before proceeding to the next step.
 
 ```bash
-pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0b1-py3-none-any.whl
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0b2-py3-none-any.whl
 ```
 
 ### 1.2 Plugin Installation Mode

+ 1 - 1
docs/installation/installation.md

@@ -19,7 +19,7 @@ PaddleX provides two installation modes: <b>wheel installation</b> and <b>plugin installation</b>
 > ❗ Note: please make sure PaddlePaddle is installed successfully before moving on to the next step.
 
 ```bash
-pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0b1-py3-none-any.whl
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0b2-py3-none-any.whl
 ```
 ### 1.2 Plugin Installation Mode
 If your PaddleX use case is <b>secondary development</b> (e.g. retraining models, fine-tuning models, customizing model structures, customizing inference code), the <b>more powerful</b> plugin installation mode is recommended.

+ 1 - 1
paddlex/.version

@@ -1 +1 @@
-3.0.0.beta1
+3.0.0.beta2

+ 0 - 2
paddlex/__main__.py

@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
 # copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");

+ 1 - 1
paddlex/inference/models/object_detection.py

@@ -60,7 +60,7 @@ class DetPredictor(BasicPredictor):
                     "img_size": "img_size",
                 }
             )
-        
+
         self._add_component(
             [
                 predictor,

+ 7 - 2
paddlex/inference/pipelines/ppchatocrv3/ppchatocrv3.py

@@ -331,8 +331,13 @@ class PPChatOCRPipeline(_TableRecPipeline):
             if isinstance(all_curve_res, dict):
                 all_curve_res = [all_curve_res]
             for sub, curve_res in zip(curve_subs, all_curve_res):
-                dt_polys_list = [list(map(list, sublist)) for sublist in curve_res["dt_polys"]]
-                sorted_items = sorted(zip(dt_polys_list, curve_res["rec_text"]), key=lambda x: (x[0][0][1], x[0][0][0]))
+                dt_polys_list = [
+                    list(map(list, sublist)) for sublist in curve_res["dt_polys"]
+                ]
+                sorted_items = sorted(
+                    zip(dt_polys_list, curve_res["rec_text"]),
+                    key=lambda x: (x[0][0][1], x[0][0][0]),
+                )
                 _, sorted_text = zip(*sorted_items)
                 structure_res.append(
                     {
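The reformatted sort above arranges recognized text fragments in reading order: keyed on the y coordinate of each polygon's first point, then on its x coordinate. A minimal sketch with made-up boxes showing the effect:

```python
# Reading-order sort, as in the pipeline code above; dt_polys/rec_text are made up.
dt_polys = [
    [[120, 40], [200, 40], [200, 60], [120, 60]],  # lower box
    [[10, 10], [90, 10], [90, 30], [10, 30]],      # upper-left box
]
rec_text = ["world", "hello"]
sorted_items = sorted(
    zip(dt_polys, rec_text), key=lambda x: (x[0][0][1], x[0][0][0])
)
_, sorted_text = zip(*sorted_items)
print(sorted_text)  # ('hello', 'world')
```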

+ 3 - 3
paddlex/inference/utils/official_models.py

@@ -261,11 +261,11 @@ PP-LCNet_x1_0_vehicle_attribute_infer.tar",
     "RT-DETR-H_layout_3cls": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/RT-DETR-H_layout_3cls_infer.tar",
     "RT-DETR-H_layout_17cls": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/RT-DETR-H_layout_17cls_infer.tar",
     "PicoDet_LCNet_x2_5_face": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/PicoDet_LCNet_x2_5_face_infer.tar",
-    "BlazeFace": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/BlazeFace_infer.tar", 
-    "BlazeFace-FPN-SSH": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/BlazeFace-FPN-SSH_infer.tar", 
+    "BlazeFace": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/BlazeFace_infer.tar",
+    "BlazeFace-FPN-SSH": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/BlazeFace-FPN-SSH_infer.tar",
     "PP-YOLOE_plus-S_face": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/PP-YOLOE_plus-S_face_infer.tar",
     "MobileFaceNet": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/MobileFaceNet_infer.tar",
-    "ResNet50_face": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/ResNet50_face_infer.tar"
+    "ResNet50_face": "https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b1_v2/ResNet50_face_infer.tar",
 }
 
 

+ 3 - 1
paddlex/modules/anomaly_detection/dataset_checker/dataset_src/check_dataset.py

@@ -41,7 +41,9 @@ def check_dataset(dataset_dir, output, sample_num=10):
         mapping_file = osp.join(dataset_dir, f"{tag}.txt")
         if not osp.exists(mapping_file):
             info(f"The mapping file ({mapping_file}) doesn't exist, ignored.")
-            info("If you are using MVTec_AD dataset, add args below in your training commands:")
+            info(
+                "If you are using MVTec_AD dataset, add args below in your training commands:"
+            )
             info("-o CheckDataset.convert.enable=True")
             info("-o CheckDataset.convert.src_dataset_type=MVTec_AD")
             continue

+ 34 - 14
paddlex/modules/anomaly_detection/dataset_checker/dataset_src/convert_dataset.py

@@ -27,6 +27,7 @@ from .....utils.file_interface import custom_open
 from .....utils import logging
 from .....utils.logging import info
 
+
 def convert_dataset(dataset_type, input_dir):
     """convert to paddlex official format"""
     if dataset_type == "LabelMe":
@@ -166,7 +167,7 @@ def polygon2mask(img_size, points):
 
 def save_item_to_txt(items, file_path):
     try:
-        with open(file_path, 'a') as file:
+        with open(file_path, "a") as file:
             file.write(items)
         file.close()
     except Exception as e:
@@ -177,34 +178,53 @@ def save_training_txt(cls_root, mode, cat):
     imgs = os.listdir(os.path.join(cls_root, mode, cat))
     imgs.sort()
     for img in imgs:
-        if mode == 'train':
+        if mode == "train":
             item = os.path.join(cls_root, mode, cat, img)
-            items = item + ' ' + item + '\n'
-            save_item_to_txt(items, os.path.join(cls_root, 'train.txt'))
-        elif mode == 'test' and cat != 'good':
+            items = item + " " + item + "\n"
+            save_item_to_txt(items, os.path.join(cls_root, "train.txt"))
+        elif mode == "test" and cat != "good":
             item1 = os.path.join(cls_root, mode, cat, img)
-            item2 = os.path.join(cls_root, 'ground_truth', cat, img.split('.')[0]+'_mask.png')
-            items = item1 + ' ' + item2 + '\n'
-            save_item_to_txt(items, os.path.join(cls_root, 'val.txt'))
+            item2 = os.path.join(
+                cls_root, "ground_truth", cat, img.split(".")[0] + "_mask.png"
+            )
+            items = item1 + " " + item2 + "\n"
+            save_item_to_txt(items, os.path.join(cls_root, "val.txt"))
 
 
 def check_old_txt(cls_pth, mode):
-    set_name = 'train.txt' if mode == 'train' else 'val.txt'
+    set_name = "train.txt" if mode == "train" else "val.txt"
     pth = os.path.join(cls_pth, set_name)
     if os.path.exists(pth):
         os.remove(pth)
 
 
 def convert_mvtec_dataset(input_dir):
-    classes =  ['bottle', 'cable', 'capsule', 'hazelnut', 'metal_nut', 'pill', 'screw',
-    'toothbrush', 'transistor', 'zipper', 'carpet', 'grid', 'leather', 'tile', 'wood']
+    classes = [
+        "bottle",
+        "cable",
+        "capsule",
+        "hazelnut",
+        "metal_nut",
+        "pill",
+        "screw",
+        "toothbrush",
+        "transistor",
+        "zipper",
+        "carpet",
+        "grid",
+        "leather",
+        "tile",
+        "wood",
+    ]
     clas = os.path.split(input_dir)[-1]
-    assert clas in classes, info(f"Make sure your class: '{clas}' in your dataset root in\n {classes}")
-    modes = ['train', 'test']
+    assert clas in classes, info(
+        f"Make sure your class: '{clas}' in your dataset root in\n {classes}"
+    )
+    modes = ["train", "test"]
     cls_root = input_dir
     for mode in modes:
         check_old_txt(cls_root, mode)
         cats = os.listdir(os.path.join(cls_root, mode))
         for cat in cats:
             save_training_txt(cls_root, mode, cat)
-    info(f"Add train.txt/val.txt successfully for {input_dir}")
+    info(f"Add train.txt/val.txt successfully for {input_dir}")

+ 1 - 3
paddlex/modules/anomaly_detection/model_list.py

@@ -13,6 +13,4 @@
 # limitations under the License.
 
 
-MODELS = [
-    "STFPM"
-]
+MODELS = ["STFPM"]

+ 1 - 0
paddlex/modules/face_recognition/dataset_checker/dataset_src/check_dataset.py

@@ -99,6 +99,7 @@ def check_train(dataset_dir, output, sample_num=10):
     attrs["train_sample_paths"] = sample_paths
     return attrs
 
+
 def check_val(dataset_dir, output, sample_num=10):
     """check dataset"""
     dataset_dir = osp.abspath(dataset_dir)

+ 1 - 4
paddlex/modules/face_recognition/model_list.py

@@ -12,7 +12,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-MODELS = [
-    "MobileFaceNet",
-    "ResNet50_face"
-]
+MODELS = ["MobileFaceNet", "ResNet50_face"]

+ 4 - 2
paddlex/modules/face_recognition/trainer.py

@@ -55,9 +55,11 @@ class FaceRecTrainer(ClsTrainer):
             self.pdx_config.update_warmup_epochs(self.train_config.warmup_steps)
         if self.global_config.output is not None:
             self.pdx_config._update_output_dir(self.global_config.output)
-    
+
     def update_dataset_cfg(self):
-        train_dataset_dir = abspath(os.path.join(self.global_config.dataset_dir, "train"))
+        train_dataset_dir = abspath(
+            os.path.join(self.global_config.dataset_dir, "train")
+        )
         val_dataset_dir = abspath(os.path.join(self.global_config.dataset_dir, "val"))
         train_list_path = abspath(os.path.join(train_dataset_dir, "label.txt"))
         val_list_path = abspath(os.path.join(val_dataset_dir, "pair_label.txt"))

+ 1 - 1
paddlex/modules/general_recognition/dataset_checker/__init__.py

@@ -104,4 +104,4 @@ class ShiTuRecDatasetChecker(BaseDatasetChecker):
         Returns:
             str: dataset type
         """
-        return "ShiTuRecDataset"
+        return "ShiTuRecDataset"

+ 16 - 10
paddlex/modules/general_recognition/dataset_checker/dataset_src/analyse_dataset.py

@@ -46,8 +46,8 @@ def deep_analyse(dataset_path, output, dataset_type="ShiTuRec"):
         }
 
     categories = list(tags_info.keys())
-    num_images = [tags_info[category]['num_images'] for category in categories]
-    num_labels = [tags_info[category]['num_labels'] for category in categories]
+    num_images = [tags_info[category]["num_images"] for category in categories]
+    num_labels = [tags_info[category]["num_labels"] for category in categories]
 
     # bar
     os_system = platform.system().lower()
@@ -60,13 +60,16 @@ def deep_analyse(dataset_path, output, dataset_type="ShiTuRec"):
     width = 0.35  # width of each bar
 
     fig, ax = plt.subplots()
-    rects1 = ax.bar(x - width/2, num_images, width, label="Num Images")
-    rects2 = ax.bar(x + width/2, num_labels, width, label="Num Classes")
+    rects1 = ax.bar(x - width / 2, num_images, width, label="Num Images")
+    rects2 = ax.bar(x + width / 2, num_labels, width, label="Num Classes")
 
     # add text labels above the bars
     ax.set_xlabel("集合", fontproperties=None if os_system == "windows" else font)
     ax.set_ylabel("数量", fontproperties=None if os_system == "windows" else font)
-    ax.set_title("不同集合的图片和类别数量", fontproperties=None if os_system == "windows" else font)
+    ax.set_title(
+        "不同集合的图片和类别数量",
+        fontproperties=None if os_system == "windows" else font,
+    )
     ax.set_xticks(x, fontproperties=None if os_system == "windows" else font)
     ax.set_xticklabels(categories)
     ax.legend()
@@ -76,11 +79,14 @@ def deep_analyse(dataset_path, output, dataset_type="ShiTuRec"):
         """Attach a text label above each bar in *rects*, displaying its height."""
         for rect in rects:
             height = rect.get_height()
-            ax.annotate('{}'.format(height),
-                        xy=(rect.get_x() + rect.get_width() / 2, height),
-                        xytext=(0, 3),  # 3 points vertical offset
-                        textcoords="offset points",
-                        ha="center", va="bottom")
+            ax.annotate(
+                "{}".format(height),
+                xy=(rect.get_x() + rect.get_width() / 2, height),
+                xytext=(0, 3),  # 3 points vertical offset
+                textcoords="offset points",
+                ha="center",
+                va="bottom",
+            )
 
     autolabel(rects1)
     autolabel(rects2)

+ 4 - 3
paddlex/modules/general_recognition/dataset_checker/dataset_src/convert_dataset.py

@@ -16,6 +16,7 @@
 import os
 import json
 from .....utils.file_interface import custom_open
+from .....utils.errors import ConvertFailedError
 
 
 def check_src_dataset(root_dir, dataset_type):
@@ -42,7 +43,7 @@ def convert(dataset_type, input_dir):
     """convert dataset to multilabel format"""
     # check format validity
     check_src_dataset(input_dir, dataset_type)
-    
+
     if dataset_type in ("LabelMe"):
         convert_labelme_dataset(input_dir)
     else:
@@ -59,7 +60,7 @@ def convert_labelme_dataset(root_dir):
     gallery_rate = 30
     query_rate = 20
     tags = ["train", "gallery", "query"]
-    label_dict = {}   
+    label_dict = {}
     image_files = []
 
     with custom_open(label_path, "r") as f:
@@ -76,7 +77,7 @@ def convert_labelme_dataset(root_dir):
             for label, value in data["flags"].items():
                 if value:
                     image_files.append(f"{image_path} {label_dict[label]}\n")
-    
+
     start = 0
     image_num = len(image_files)
     rate_list = [train_rate, gallery_rate, query_rate]

+ 1 - 3
paddlex/modules/general_recognition/dataset_checker/dataset_src/utils/visualizer.py

@@ -133,9 +133,7 @@ def draw_label(image, label):
     if tuple(map(int, PIL.__version__.split("."))) <= (10, 0, 0):
         text_width, text_height = draw.textsize(label, font)
     else:
-        left, top, right, bottom = draw.textbbox(
-            (0, 0), label, font
-        )
+        left, top, right, bottom = draw.textbbox((0, 0), label, font)
         text_width, text_height = right - left, bottom - top
 
     rect_left = 3
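The version branch above exists because Pillow deprecated `ImageDraw.textsize` in 9.2.0 and removed it in 10.0.0, so newer versions must measure text with `textbbox` (which also means a strict `<` comparison would arguably be safer than `<=` for 10.0.0 itself). A self-contained sketch of the `textbbox` path:

```python
# Measuring text extents with ImageDraw.textbbox (available since Pillow 8.0).
from PIL import Image, ImageDraw, ImageFont

img = Image.new("RGB", (200, 50), "white")
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()
left, top, right, bottom = draw.textbbox((0, 0), "label", font)
text_width, text_height = right - left, bottom - top
print(text_width, text_height)
```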

+ 5 - 3
paddlex/modules/general_recognition/evaluator.py

@@ -15,15 +15,17 @@
 from ..image_classification import ClsEvaluator
 from .model_list import MODELS
 
+
 class ShiTuRecEvaluator(ClsEvaluator):
     """ShiTu Recognition Model Evaluator"""
 
     entities = MODELS
-    
+
     def update_config(self):
         """update evalution config"""
         if self.eval_config.log_interval:
             self.pdx_config.update_log_interval(self.eval_config.log_interval)
-        self.pdx_config.update_dataset(self.global_config.dataset_dir, "ShiTuRecDataset")
+        self.pdx_config.update_dataset(
+            self.global_config.dataset_dir, "ShiTuRecDataset"
+        )
         self.pdx_config.update_pretrained_weights(self.eval_config.weight_path)
-

+ 1 - 1
paddlex/modules/general_recognition/exportor.py

@@ -18,5 +18,5 @@ from .model_list import MODELS
 
 class ShiTuRecExportor(ClsExportor):
     """ShiTu Recognition Model Exportor"""
-    
+
     entities = MODELS

+ 3 - 4
paddlex/modules/general_recognition/model_list.py

@@ -13,8 +13,7 @@
 # limitations under the License.
 
 MODELS = [
-"PP-ShiTuV2_rec",
-"PP-ShiTuV2_rec_CLIP_vit_base",
-"PP-ShiTuV2_rec_CLIP_vit_large"
+    "PP-ShiTuV2_rec",
+    "PP-ShiTuV2_rec_CLIP_vit_base",
+    "PP-ShiTuV2_rec_CLIP_vit_large",
 ]
-

+ 3 - 1
paddlex/modules/general_recognition/trainer.py

@@ -30,7 +30,9 @@ class ShiTuRecTrainer(ClsTrainer):
         if self.train_config.save_interval:
             self.pdx_config.update_save_interval(self.train_config.save_interval)
 
-        self.pdx_config.update_dataset(self.global_config.dataset_dir, "ShiTuRecDataset")
+        self.pdx_config.update_dataset(
+            self.global_config.dataset_dir, "ShiTuRecDataset"
+        )
         if self.train_config.num_classes is not None:
             self.pdx_config.update_num_classes(self.train_config.num_classes)
         if self.train_config.pretrain_weight_path != "":

+ 2 - 2
paddlex/modules/image_classification/dataset_checker/dataset_src/convert_dataset.py

@@ -1,10 +1,10 @@
-# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
+# copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
-#     http://www.apache.org/licenses/LICENSE-2.0
+#    http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,

+ 0 - 1
paddlex/modules/image_unwarping/model_list.py

@@ -15,4 +15,3 @@
 MODELS = [
     "UVDoc",
 ]
-

+ 12 - 7
paddlex/modules/object_detection/dataset_checker/dataset_src/utils/visualizer.py

@@ -1,12 +1,17 @@
-# -*- coding: UTF-8 -*-
-################################################################################
+# copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
 #
-# Copyright (c) 2024 Baidu.com, Inc. All Rights Reserved
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
 #
-################################################################################
-"""
-Author: PaddlePaddle Authors
-"""
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import os
 import numpy as np
 import json

+ 1 - 1
paddlex/modules/object_detection/model_list.py

@@ -69,5 +69,5 @@ MODELS = [
     "PicoDet_LCNet_x2_5_face",
     "BlazeFace",
     "BlazeFace-FPN-SSH",
-    "PP-YOLOE_plus-S_face"
+    "PP-YOLOE_plus-S_face",
 ]

+ 0 - 1
paddlex/repo_apis/PaddleClas_api/cls/__init__.py

@@ -17,4 +17,3 @@ from .model import ClsModel
 from .runner import ClsRunner
 from .config import ClsConfig
 from . import register
-

+ 1 - 1
paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_conv_large.yaml

@@ -178,4 +178,4 @@ Metric:
         topk: [1, 5]
   Eval:
     - TopkAcc:
-        topk: [1, 5]
+        topk: [1, 5]

+ 1 - 1
paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_conv_medium.yaml

@@ -178,4 +178,4 @@ Metric:
         topk: [1, 5]
   Eval:
     - TopkAcc:
-        topk: [1, 5]
+        topk: [1, 5]

+ 1 - 1
paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_conv_small.yaml

@@ -178,4 +178,4 @@ Metric:
         topk: [1, 5]
   Eval:
     - TopkAcc:
-        topk: [1, 5]
+        topk: [1, 5]

+ 1 - 1
paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_hybrid_large.yaml

@@ -178,4 +178,4 @@ Metric:
         topk: [1, 5]
   Eval:
     - TopkAcc:
-        topk: [1, 5]
+        topk: [1, 5]

+ 1 - 1
paddlex/repo_apis/PaddleClas_api/configs/MobileNetV4_hybrid_medium.yaml

@@ -172,4 +172,4 @@ Metric:
         topk: [1, 5]
   Eval:
     - TopkAcc:
-        topk: [1, 5]
+        topk: [1, 5]

+ 1 - 1
paddlex/repo_apis/PaddleClas_api/configs/PP-LCNet_x1_0_doc_ori.yaml

@@ -148,4 +148,4 @@ Metric:
         topk: [1]
   Eval:
     - TopkAcc:
-        topk: [1]
+        topk: [1]

+ 2 - 6
paddlex/repo_apis/PaddleClas_api/shitu_rec/config.py

@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
- 
+
 from ..cls import ClsConfig
 from ....utils.misc import abspath
 
@@ -44,7 +44,6 @@ class ShiTuRecConfig(ClsConfig):
         else:
             train_list_path = f"{dataset_path}/train.txt"
 
-
         ds_cfg = [
             f"DataLoader.Train.dataset.name={dataset_type}",
             f"DataLoader.Train.dataset.image_root={dataset_path}",
@@ -83,7 +82,6 @@ class ShiTuRecConfig(ClsConfig):
             raise ValueError("The input `mode` should be train or eval")
         self.update(_cfg)
 
-
     def update_num_classes(self, num_classes: int):
         """update classes number
 
@@ -93,7 +91,6 @@ class ShiTuRecConfig(ClsConfig):
         update_str_list = [f"Arch.Head.class_num={num_classes}"]
         self.update(update_str_list)
 
-
     def update_num_workers(self, num_workers: int):
         """update workers number of train and eval dataloader
 
@@ -135,11 +132,10 @@ class ShiTuRecConfig(ClsConfig):
         ]
         self.update(_cfg)
 
-
     def _get_backbone_name(self) -> str:
         """get backbone name of rec model
 
         Returns:
             str: the model backbone name, i.e., `Arch.Backbone.name` in config.
         """
-        return self.dict["Arch"]["Backbone"]["name"]
+        return self.dict["Arch"]["Backbone"]["name"]

+ 3 - 9
paddlex/repo_apis/PaddleClas_api/shitu_rec/register.py

@@ -38,9 +38,7 @@ register_model_info(
     {
         "model_name": "PP-ShiTuV2_rec",
         "suite": "ShiTuRec",
-        "config_path": osp.join(
-            PDX_CONFIG_DIR, "PP-ShiTuV2_rec.yaml"
-        ),
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-ShiTuV2_rec.yaml"),
         "supported_apis": ["train", "evaluate", "predict", "export"],
         "supported_dataset_types": ["ShiTuRecDataset"],
         "infer_config": None,
@@ -51,9 +49,7 @@ register_model_info(
     {
         "model_name": "PP-ShiTuV2_rec_CLIP_vit_base",
         "suite": "ShiTuRec",
-        "config_path": osp.join(
-            PDX_CONFIG_DIR, "PP-ShiTuV2_rec_CLIP_vit_base.yaml"
-        ),
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-ShiTuV2_rec_CLIP_vit_base.yaml"),
         "supported_apis": ["train", "evaluate", "predict", "export"],
         "supported_dataset_types": ["ShiTuRecDataset"],
         "infer_config": None,
@@ -64,9 +60,7 @@ register_model_info(
     {
         "model_name": "PP-ShiTuV2_rec_CLIP_vit_large",
         "suite": "ShiTuRec",
-        "config_path": osp.join(
-            PDX_CONFIG_DIR, "PP-ShiTuV2_rec_CLIP_vit_large.yaml"
-        ),
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-ShiTuV2_rec_CLIP_vit_large.yaml"),
         "supported_apis": ["train", "evaluate", "predict", "export"],
         "supported_dataset_types": ["ShiTuRecDataset"],
         "infer_config": None,

+ 1 - 0
paddlex/repo_apis/PaddleClas_api/shitu_rec/runner.py

@@ -20,6 +20,7 @@ from ...base.utils.subprocess import CompletedProcess
 
 class ShiTuRecRunner(ClsRunner):
     """ShiTuRec Runner"""
+
     pass
 
 

+ 1 - 1
paddlex/repo_apis/PaddleDetection_api/configs/CenterNet-ResNet50.yaml

@@ -127,4 +127,4 @@ export:
   post_process: True  # Whether post-processing is included in the network when exporting the model.
   nms: True           # Whether NMS is included in the network when exporting the model.
   benchmark: False    # Used for testing model performance; if set `True`, post-processing and NMS will not be exported.
-  fuse_conv_bn: False
+  fuse_conv_bn: False

+ 1 - 1
paddlex/repo_apis/PaddleDetection_api/configs/FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml

@@ -178,4 +178,4 @@ export:
   post_process: True  # Whether post-processing is included in the network when exporting the model.
   nms: True           # Whether NMS is included in the network when exporting the model.
   benchmark: False    # Used for testing model performance; if set `True`, post-processing and NMS will not be exported.
-  fuse_conv_bn: False
+  fuse_conv_bn: False

+ 1 - 1
paddlex/repo_apis/PaddleDetection_api/configs/PicoDet-S_layout_3cls.yaml

@@ -162,4 +162,4 @@ export:
   post_process: true
   nms: true
   benchmark: false
-  fuse_conv_bn: false
+  fuse_conv_bn: false

+ 9 - 5
paddlex/repo_apis/PaddleDetection_api/object_det/config.py

@@ -367,11 +367,15 @@ class DetConfig(BaseConfig, PPDetConfigMixin):
             num_classes (int): the classes number value to set.
         """
         self["num_classes"] = num_classes
-        if 'CenterNet' in self.model_name:
-            for i in range(len(self['TrainReader']['sample_transforms'])):
-                if 'Gt2CenterNetTarget' in self['TrainReader']['sample_transforms'][i].keys():
-                     self['TrainReader']['sample_transforms'][i]['Gt2CenterNetTarget']['num_classes'] = num_classes
-        
+        if "CenterNet" in self.model_name:
+            for i in range(len(self["TrainReader"]["sample_transforms"])):
+                if (
+                    "Gt2CenterNetTarget"
+                    in self["TrainReader"]["sample_transforms"][i].keys()
+                ):
+                    self["TrainReader"]["sample_transforms"][i]["Gt2CenterNetTarget"][
+                        "num_classes"
+                    ] = num_classes
 
     def update_random_size(self, randomsize: list[list[int, int]]):
         """update `target_size` of `BatchRandomResize` op in TestReader

+ 14 - 0
paddlex/repo_apis/PaddleDetection_api/object_det/official_categories.py

@@ -1,3 +1,17 @@
+# copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 official_categories = {
     "PP-YOLOE-L_human": [{"name": "pedestrian", "id": 0}],
     "PP-YOLOE-S_human": [{"name": "pedestrian", "id": 0}],

+ 231 - 227
paddlex/repo_apis/PaddleDetection_api/object_det/register.py

@@ -377,15 +377,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNet34-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNet34-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNet34-FPN",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-ResNet34-FPN.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -393,15 +393,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNet50',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNet50.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNet50",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-ResNet50.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -409,15 +409,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNet50-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNet50-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNet50-FPN",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-ResNet50-FPN.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -425,15 +425,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNet50-vd-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNet50-vd-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNet50-vd-FPN",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-ResNet50-vd-FPN.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -441,15 +441,17 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNet50-vd-SSLDv2-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNet50-vd-SSLDv2-FPN",
+        "suite": "Det",
+        "config_path": osp.join(
+            PDX_CONFIG_DIR, "FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml"
+        ),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -457,15 +459,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNet101',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNet101.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNet101",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-ResNet101.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -473,15 +475,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNet101-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNet101-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNet101-FPN",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-ResNet101-FPN.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -489,15 +491,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-ResNeXt101-vd-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-ResNeXt101-vd-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-ResNeXt101-vd-FPN",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-ResNeXt101-vd-FPN.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -505,15 +507,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FasterRCNN-Swin-Tiny-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FasterRCNN-Swin-Tiny-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FasterRCNN-Swin-Tiny-FPN",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FasterRCNN-Swin-Tiny-FPN.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -521,15 +523,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'Cascade-FasterRCNN-ResNet50-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'Cascade-FasterRCNN-ResNet50-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "Cascade-FasterRCNN-ResNet50-FPN",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "Cascade-FasterRCNN-ResNet50-FPN.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -537,15 +539,17 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN",
+        "suite": "Det",
+        "config_path": osp.join(
+            PDX_CONFIG_DIR, "Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml"
+        ),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -553,15 +557,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PicoDet-XS',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PicoDet-XS.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PicoDet-XS",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PicoDet-XS.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -569,15 +573,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PicoDet-M',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PicoDet-M.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PicoDet-M",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PicoDet-M.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -585,15 +589,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'FCOS-ResNet50',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'FCOS-ResNet50.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "FCOS-ResNet50",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "FCOS-ResNet50.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -601,31 +605,31 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'DETR-R50',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'DETR-R50.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "DETR-R50",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "DETR-R50.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
 
 
 register_model_info(
-        {
-        'model_name': 'PP-YOLOE-L_vehicle',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-YOLOE-L_vehicle.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+    {
+        "model_name": "PP-YOLOE-L_vehicle",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE-L_vehicle.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -633,15 +637,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PP-YOLOE-S_vehicle',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-YOLOE-S_vehicle.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PP-YOLOE-S_vehicle",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE-S_vehicle.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -649,15 +653,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PP-ShiTuV2_det',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-ShiTuV2_det.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PP-ShiTuV2_det",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-ShiTuV2_det.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -665,15 +669,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PP-YOLOE-L_human',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-YOLOE-L_human.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PP-YOLOE-L_human",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE-L_human.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -681,31 +685,31 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PP-YOLOE-S_human',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-YOLOE-S_human.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PP-YOLOE-S_human",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE-S_human.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
-) 
+)
 
 
 register_model_info(
     {
-        'model_name': 'CenterNet-DLA-34',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'CenterNet-DLA-34.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "CenterNet-DLA-34",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "CenterNet-DLA-34.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -713,15 +717,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'CenterNet-ResNet50',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'CenterNet-ResNet50.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "CenterNet-ResNet50",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "CenterNet-ResNet50.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -729,15 +733,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PP-YOLOE_plus_SOD-L',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-YOLOE_plus_SOD-L.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PP-YOLOE_plus_SOD-L",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE_plus_SOD-L.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -745,15 +749,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PP-YOLOE_plus_SOD-S',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-YOLOE_plus_SOD-S.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PP-YOLOE_plus_SOD-S",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE_plus_SOD-S.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
@@ -761,15 +765,15 @@ register_model_info(
 
 register_model_info(
     {
-        'model_name': 'PP-YOLOE_plus_SOD-largesize-L',
-        'suite': 'Det',
-        'config_path': osp.join(PDX_CONFIG_DIR, 'PP-YOLOE_plus_SOD-largesize-L.yaml'),
-        'supported_apis': ['train', 'evaluate', 'predict', 'export', 'infer'],
-        'supported_dataset_types': ['COCODetDataset'],
-        'supported_train_opts': {
-            'device': ['cpu', 'gpu_nxcx', 'xpu', 'npu', 'mlu'],
-            'dy2st': False,
-            'amp': ['OFF']
+        "model_name": "PP-YOLOE_plus_SOD-largesize-L",
+        "suite": "Det",
+        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE_plus_SOD-largesize-L.yaml"),
+        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
+        "supported_dataset_types": ["COCODetDataset"],
+        "supported_train_opts": {
+            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
+            "dy2st": False,
+            "amp": ["OFF"],
         },
     }
 )
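
Every `register_model_info` call in this file follows the same shape: a dict keyed by `model_name` carrying the suite, config path, supported APIs, dataset types, and training options. For orientation, here is a minimal, self-contained sketch of that registry pattern; `MODEL_INFO_REGISTRY` and `get_model_info` are illustrative stand-ins, not PaddleX's actual internals.

```python
# Minimal sketch of the registry pattern used above. The registry dict and
# lookup helper are illustrative assumptions, not PaddleX's real internals.
import os.path as osp

PDX_CONFIG_DIR = "configs"  # assumed location; the real value comes from PaddleX

MODEL_INFO_REGISTRY = {}

def register_model_info(info: dict) -> None:
    """Index a model's metadata by its unique model_name."""
    MODEL_INFO_REGISTRY[info["model_name"]] = info

def get_model_info(model_name: str) -> dict:
    """Fetch a registered model's metadata, failing loudly for unknown names."""
    try:
        return MODEL_INFO_REGISTRY[model_name]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name!r}") from None

register_model_info(
    {
        "model_name": "PP-YOLOE-S_vehicle",
        "suite": "Det",
        "config_path": osp.join(PDX_CONFIG_DIR, "PP-YOLOE-S_vehicle.yaml"),
        "supported_apis": ["train", "evaluate", "predict", "export", "infer"],
        "supported_dataset_types": ["COCODetDataset"],
        "supported_train_opts": {
            "device": ["cpu", "gpu_nxcx", "xpu", "npu", "mlu"],
            "dy2st": False,
            "amp": ["OFF"],
        },
    }
)

info = get_model_info("PP-YOLOE-S_vehicle")
assert "train" in info["supported_apis"]
```

Keeping the metadata declarative like this lets tooling validate a requested API, dataset type, or device against the registered entry before launching a job.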

+ 0 - 1
paddlex/repo_apis/PaddleSeg_api/configs/STFPM.yaml

@@ -46,4 +46,3 @@ lr_scheduler:
   learning_rate: 0.4
   end_lr: 0.4
   power: 0.9
-
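
The keys remaining in this hunk (`learning_rate`, `end_lr`, `power`) describe a polynomial ("poly") decay schedule. Below is a hedged sketch of how such a schedule is commonly evaluated, assuming the standard poly formula rather than PaddleSeg's exact implementation; `poly_lr` is an illustrative helper.

```python
# Hypothetical evaluation of a poly learning-rate schedule matching the YAML
# keys above; PaddleSeg's actual implementation may differ in details.
def poly_lr(step, total_steps, learning_rate=0.4, end_lr=0.4, power=0.9):
    """Decay from learning_rate to end_lr following (1 - t)**power."""
    frac = min(step / total_steps, 1.0)
    return (learning_rate - end_lr) * (1 - frac) ** power + end_lr

# With learning_rate == end_lr == 0.4, as configured here, the decay term
# vanishes and the schedule is effectively constant:
assert poly_lr(0, 100) == poly_lr(50, 100) == 0.4
```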

+ 13 - 0
paddlex/repo_apis/__init__.py

@@ -0,0 +1,13 @@
+# Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

+ 4 - 4
paddlex/repo_manager/meta.py

@@ -31,7 +31,7 @@ REPO_META = {
     "PaddleSeg": {
         "git_path": "/PaddlePaddle/PaddleSeg.git",
         "platform": "github",
-        "branch": "develop",
+        "branch": "release/2.10",
         "pkg_name": "paddleseg",
         "lib_name": "paddleseg",
         "pdx_pkg_name": "PaddleSeg_api",
@@ -42,7 +42,7 @@ REPO_META = {
     "PaddleClas": {
         "git_path": "/PaddlePaddle/PaddleClas.git",
         "platform": "github",
-        "branch": "develop",
+        "branch": "release/2.6",
         "pkg_name": "paddleclas",
         "lib_name": "paddleclas",
         "pdx_pkg_name": "PaddleClas_api",
@@ -54,7 +54,7 @@ REPO_META = {
     "PaddleDetection": {
         "git_path": "/PaddlePaddle/PaddleDetection.git",
         "platform": "github",
-        "branch": "develop",
+        "branch": "release/2.8",
         "pkg_name": "paddledet",
         "lib_name": "ppdet",
         "pdx_pkg_name": "PaddleDetection_api",
@@ -64,7 +64,7 @@ REPO_META = {
     "PaddleOCR": {
         "git_path": "/PaddlePaddle/PaddleOCR.git",
         "platform": "github",
-        "branch": "main",
+        "branch": "release/2.9",
         "pkg_name": "paddleocr",
         "lib_name": "paddleocr",
         "pdx_pkg_name": "PaddleOCR_api",

+ 13 - 0
paddlex/utils/__init__.py

@@ -0,0 +1,13 @@
+# Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.