cuicheng01 1 year ago
parent
commit
8657c121b1
45 changed files with 263 additions and 246 deletions
1. README.md (+48 −23)
2. docs/tutorials/INSTALL_OTHER_DEVICES.md (+10 −10)
3. docs/tutorials/data/annotation/ClsAnnoTools.md (+9 −9)
4. docs/tutorials/data/annotation/DetAnnoTools.md (+23 −24)
5. docs/tutorials/data/annotation/InstSegAnnoTools.md (+14 −16)
6. docs/tutorials/data/annotation/SegAnnoTools.md (+14 −15)
7. docs/tutorials/data/dataset_check.md (+7 −7)
8. docs/tutorials/models/model_inference_api.md (+0 −0)
9. docs/tutorials/models/model_inference_tools.md (+0 −0)
10. docs/tutorials/models/support_mlu_model_list.md (+1 −14)
11. docs/tutorials/models/support_model_list.md (+1 −1)
12. docs/tutorials/models/support_npu_model_list.md (+20 −13)
13. docs/tutorials/models/support_xpu_model_list.md (+4 −14)
14. docs/tutorials/pipelines/pipeline_inference_api.md (+0 −0)
15. docs/tutorials/pipelines/pipeline_inference_tools.md (+0 −0)
16. docs/tutorials/pipelines/support_pipeline_list.md (+1 −0)
17. paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml (+1 −1)
18. paddlex/configs/ts_anomaly_detection/DLinear_ad.yaml (+1 −1)
19. paddlex/configs/ts_anomaly_detection/Nonstationary_ad.yaml (+1 −1)
20. paddlex/configs/ts_anomaly_detection/PatchTST_ad.yaml (+1 −1)
21. paddlex/configs/ts_anomaly_detection/TimesNet_ad.yaml (+1 −1)
22. paddlex/configs/ts_classification/TimesNet_cls.yaml (+1 −1)
23. paddlex/configs/ts_forecast/DLinear.yaml (+1 −1)
24. paddlex/configs/ts_forecast/NLinear.yaml (+1 −1)
25. paddlex/configs/ts_forecast/Nonstationary.yaml (+1 −1)
26. paddlex/configs/ts_forecast/PatchTST.yaml (+1 −1)
27. paddlex/configs/ts_forecast/RLinear.yaml (+1 −1)
28. paddlex/configs/ts_forecast/TiDE.yaml (+1 −1)
29. paddlex/configs/ts_forecast/TimesNet.yaml (+1 −1)
30. paddlex/modules/base/predictor/io/readers.py (+14 −0)
31. paddlex/modules/base/predictor/kernel_option.py (+5 −0)
32. paddlex/modules/base/predictor/predictor.py (+5 −4)
33. paddlex/modules/base/predictor/utils/paddle_inference_predictor.py (+2 −1)
34. paddlex/modules/image_classification/predictor/transforms.py (+5 −6)
35. paddlex/modules/object_detection/predictor/transforms.py (+3 −3)
36. paddlex/modules/semantic_segmentation/evaluator.py (+6 −4)
37. paddlex/modules/semantic_segmentation/predictor/predictor.py (+4 −0)
38. paddlex/modules/semantic_segmentation/trainer.py (+2 −0)
39. paddlex/modules/table_recognition/predictor/predictor.py (+2 −0)
40. paddlex/modules/table_recognition/predictor/transforms.py (+2 −4)
41. paddlex/modules/text_recognition/predictor/transforms.py (+17 −13)
42. paddlex/modules/ts_forecast/predictor.py (+28 −28)
43. paddlex/utils/config.py (+0 −21)
44. paddlex/utils/device.py (+2 −2)
45. setup.py (+1 −1)

+ 48 - 23
README.md

@@ -1,7 +1,8 @@
 <p align="center">
   <img src="https://github.com/PaddlePaddle/PaddleX/assets/45199522/63c6d059-234f-4a27-955e-ac89d81409ee" width="360" height ="55" alt="PaddleX" align="middle" />
 </p>
- <p align= "center"> PaddleX -- 飞桨低代码开发工具,以低代码的形式支持开发者快速实现产业实际项目落地 </p>
+
+<p align= "center"> PaddleX -- 飞桨低代码开发工具,以低代码的形式支持开发者快速实现产业实际项目落地 </p>
 
 <p align="left">
     <a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-red.svg"></a>
@@ -10,51 +11,75 @@
     <a href=""><img src="https://img.shields.io/badge/hardware-intel cpu%2C%20gpu%2C%20xpu%2C%20npu%2C%20mlu-yellow.svg"></a>
 </p>
 
+## 简介
+PaddleX3.0 是飞桨精选模型的低代码开发工具,支持国内外多款主流硬件的模型训练和推理,覆盖工业、能源、金融、交通、教育等全行业,助力开发者产业实践落地。
+
+任务示例展示
 
+## 📣 近期更新
+🔥 PaddleX3.0 升级中,6 月正式发布,敬请期待,云端使用请前往飞桨 AI Studio 星河社区:https://aistudio.baidu.com/pipeline/mine ,点击「创建产线」开启使用。
 
-## 近期动态
-🔥 PaddleX3.0 升级中,6月正式发布,敬请期待,云端使用请前往飞桨 AI Studio 星河社区:https://aistudio.baidu.com/pipeline/mine ,点击「创建产线」开启使用。
+## 🌟 特性
 
-## 产品介绍
-PaddleX3.0 是飞桨精选模型的低代码开发工具,支持国内外多款主流硬件的训练和推理,覆盖工业、能源、金融、交通、教育等全行业,沉淀产业实际经验,并提供丰富的案例实践教程,全程助力开发者产业实践落地。
+PaddleX 3.0 集成飞桨生态优势能力,覆盖7大场景任务,构建 16 条模型产线,提供低代码开发模式,助力开发者在不同主流硬件上进行模型全流程开发。
 
-PaddleX3.0 分为本地端和云端,本地端提供统一任务API接口,支持图像分类、目标检测、图像分割、实例分割、OCR、时序相关等任务;云端提供[图形化开发界面](https://aistudio.baidu.com/pipeline/mine),支持开发者使用零代码产线产出高质量模型和部署包。本项目面向本地端,开发者可以基于本项目快速完成模型训练、评估、推理。本项目提供了两种模型开发工具,即**单模型开发工具**和**模型产线开发工具**。
+  - **基础模型产线(模型数量多,场景全):** 精选 72 个飞桨优质模型,覆盖图像分类、目标检测、图像分割、OCR、文本图像版面分析、时序预测等场景任务
+  - **特色模型产线(提效显著):** 提供大小模型结合,大模型半监督学习和多模型融合显著提效方案
+  - **低代码开发模式(便捷开发部署):** 提供零代码和低代码两种开发方式。
+     - 零代码开发通过用户图形界面(GUI)交互式提交后台训练任务,打通在线&离线部署,支持以 API 的形式调用在线服务。
+     - 低代码开发,一套 API 接口实现 16 条模型产线全流程开发,同时支持用户自定义模型串联流程。
+  - **本地端多硬件支持(兼容性强):** 支持英伟达 GPU、昆仑芯、昇腾和寒武纪多硬件上,纯离线使用 
 
+<div align="center">
+    <img src="https://github.com/PaddlePaddle/PaddleX/assets/45199522/61c4738f-735e-4ceb-aa5f-1038d4506d1c">
+</div>
 
-## 安装与快速开始
+## 安装与快速开始
 - [安装](./docs/tutorials/INSTALL.md)
 - 快速开始
-  - [单模型开发工具](./docs/tutorials/tools/model_tools.md)
-  - [模型产线开发工具](./docs/tutorials/tools/pipelines_tools.md)
+  - [单模型开发工具](./docs/tutorials/inference/model_inference_tools.md)
+  - [模型产线开发工具](./docs/tutorials/inference/pipeline_inference_tools.md)
+
+## 🛠️ PaddleX3.0 覆盖的模型和模型产线
+  - [单模型列表](./docs/tutorials/models/support_model_list.md)
+  - [模型产线列表](./docs/tutorials/pipelines/support_pipeline_list.md)
+
+## 📖 零代码开发教程
+- [云端图形化开发界面](https://aistudio.baidu.com/pipeline/mine):支持开发者使用零代码产线产出高质量模型和部署包
+- [教程《零门槛开发产业级 AI 模型》](https://aistudio.baidu.com/practical/introduce/546656605663301):提供产业级模型开发经验,并且用12个实用的产业实践案例,手把手带你零门槛开发产业级AI模型
 
-## 单模型开发工具
-本节介绍 PaddleX3.0 单模型的全流程开发流程,包括数据准备、模型训练/评估、模型推理的使用方法。PaddleX3.0 支持的模型可以参考 [PaddleX模型库](./docs/tutorials/models/support_model_list.md)。
+## 📖 低代码开发教程
 
-### 1. 数据准备
-- [数据准备流程](./docs/data/README.md)
+### 一、单模型开发工具 🚀
+本节介绍 PaddleX3.0 单模型的全流程开发流程,包括数据准备、模型训练/评估、模型推理的使用方法。PaddleX3.0 支持的模型可以参考 [PaddleX 模型库](./docs/tutorials/models/support_model_list.md)。
+
+#### 1. 快速体验
+- [快速体验](./docs/tutorials/models/model_inference_tools.md)
+
+#### 2. 数据准备
+- [数据准备流程](./docs/tutorials/data/README.md)
 - [数据标注](./docs/tutorials/data/annotation/README.md)
 - [数据校验](./docs/tutorials/data/dataset_check.md)
-### 2. 模型训练
+
+#### 3. 模型训练
 - [模型训练/评估](./docs/tutorials/base/README.md)
 - [模型优化](./docs/tutorials/base/model_optimize.md)
 
-### 3. 模型推理
- - [模型推理](docs/tutorials/inference/model_inference_tools.md)
- - [模型推理 API 介绍](docs/tutorials/inference/model_infernce_api.md)
+#### 4. 模型推理
+- [模型推理](./docs/tutorials/base/README.md)
 
-## 模型产线开发工具
- - [模型产线推理](docs/tutorials/inference/pipeline_inference_tools.md)
- - [模型产线推理 API 介绍](docs/tutorials/inference/pipeline_infernce_api.md)
+### 二、模型产线开发工具 🔥
+本节将介绍 PaddleX3.0 模型产线的全流程开发流程,包括数据准备、模型训练/评估、模型推理的使用方法。PaddleX3.0 支持的模型产线可以参考 [PaddleX 模型产线列表](./docs/tutorials/pipelines/support_pipeline_list.md)
 
-## 多硬件支持
-🔥 本项目支持在多种硬件上进行模型的开发,除了 GPU 外,当前支持的硬件还有**昆仑芯**、**昇腾芯**、**寒武纪芯**。只需添加一个配置设备的参数,即可在对应硬件上使用上述工具。详情可以参考上述文档。
+## 🌟 多硬件支持
+本项目支持在多种硬件上进行模型的开发,除了 GPU 外,当前支持的硬件还有**昆仑芯**、**昇腾芯**、**寒武纪芯**。只需添加一个配置设备的参数,即可在对应硬件上使用上述工具。
 
 - 昇腾芯支持的模型列表请参考 [PaddleX 昇腾芯模型列表](./docs/tutorials/models/support_npu_model_list.md)。
 - 昆仑芯支持的模型列表请参考 [PaddleX 昆仑芯模型列表](./docs/tutorials/models/support_xpu_model_list.md)。
 - 寒武纪芯支持的模型列表请参考 [PaddleX 寒武纪芯模型列表](./docs/tutorials/models/support_mlu_model_list.md)。
 
 
-## 贡献代码
+## 👀 贡献代码
 
 我们非常欢迎您为 PaddleX 贡献代码或者提供使用建议。如果您可以修复某个 issue 或者增加一个新功能,欢迎给我们提交 Pull Requests。
 

+ 10 - 10
docs/tutorials/INSTALL_OTHER_DEVICES.md

@@ -6,8 +6,8 @@
 - 1.拉取镜像,此镜像仅为开发环境,镜像中不包含预编译的飞桨安装包,镜像中已经默认安装了昇腾算子库 CANN-8.0.RC1。
 
 ```
-docker pull registry.baidubce.com/device/paddle-npu:cann80RC1-ubuntu20-aarch64-gcc84-py39 # 适用于 ARM 架构
-docker pull registry.baidubce.com/device/paddle-npu:cann80RC1-ubuntu20-x86_64-gcc84-py39 # 适用于 X86 架构
+# 适用于 X86 架构,暂时不提供 Arrch64 架构镜像
+docker pull registry.baidubce.com/device/paddle-npu:cann80RC1-ubuntu20-x86_64-gcc84-py39
 ```
 
 - 2.参考如下命令启动容器,ASCEND_RT_VISIBLE_DEVICES 指定可见的 NPU 卡号
@@ -18,7 +18,7 @@ docker run -it --name paddle-npu-dev -v $(pwd):/work \
     -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
     -v /usr/local/dcmi:/usr/local/dcmi \
     -e ASCEND_RT_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \
-    registry.baidubce.com/device/paddle-npu:cann80RC1-ubuntu20-$(uname -m)-gcc84-py39 /bin/bash
+    registry.baidubce.com/device/paddle-npu:cann80RC1-ubuntu20-x86_64-gcc84-py39 /bin/bash
 ```
 ### 1.2 安装paddle包
 当前提供 Python3.9 的 wheel 安装包。如有其他 Python 版本需求,可以参考[飞桨官方文档](https://www.paddlepaddle.org.cn/install/quick)自行编译安装。
@@ -27,8 +27,8 @@ docker run -it --name paddle-npu-dev -v $(pwd):/work \
 
 ```
 # 注意需要先安装飞桨CPU版本
-pip install 
-pip install
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddle-device/npu/paddlepaddle-0.0.0-cp39-cp39-linux_x86_64.whl
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddle-device/npu/paddle_custom_npu-0.0.0-cp39-cp39-linux_x86_64.whl
 ```
 - 2.验证安装包
 安装完成之后,运行如下命令。
@@ -65,8 +65,8 @@ docker run -it --name paddle-mlu-dev -v $(pwd):/work \
 - 1.下载安装 Python3.10 的wheel 安装包。
 ```
 # 注意需要先安装飞桨 CPU 版本
-python -m pip install paddlepaddle -i https://www.paddlepaddle.org.cn/packages/nightly/cpu/
-python -m pip install --pre paddle-custom-mlu -i https://www.paddlepaddle.org.cn/packages/nightly/mlu/
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddle-device/mlu/paddlepaddle-3.0.0.dev20240621-cp310-cp310-linux_x86_64.whl
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddle-device/mlu/paddle_custom_mlu-3.0.0.dev20240621-cp310-cp310-linux_x86_64.whl
 ```
 - 2.验证安装包
 安装完成之后,运行如下命令。
@@ -102,8 +102,8 @@ docker run -it --name=xxx -m 81920M --memory-swap=81920M \
 
 - 1.安装 Python3.10 的 wheel 安装包
 ```
-pip install https://paddle-wheel.bj.bcebos.com/2.6.1/xpu/paddlepaddle_xpu-2.6.1-cp310-cp310-linux_x86_64.whl # X86 架构
-pip install https://paddle-device.bj.bcebos.com/2.6.1/xpu/paddlepaddle_xpu-2.6.1-cp310-cp310-linux_aarch64.whl # ARM 架构
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddle-device/xpu/paddlepaddle_xpu-2.6.1-cp310-cp310-linux_x86_64.whl # X86 架构
+pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddle-device/xpu/paddlepaddle_xpu-2.6.1-cp310-cp310-linux_aarch64.whl # ARM 架构
 ```
 - 2.验证安装包
 安装完成之后,运行如下命令
@@ -113,4 +113,4 @@ python -c "import paddle; paddle.utils.run_check()"
 预期得到如下输出结果
 ```
 PaddlePaddle is installed successfully! Let's start deep learning with PaddlePaddle now.
-```
+```
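Editor's note: for all three devices the verification step is the same `paddle.utils.run_check()` call shown in the doc above. A minimal post-install sketch (the device string and card index are placeholders; use "npu", "mlu" or "xpu" to match whichever plugin wheel was installed):

```python
# Post-install sanity check (sketch): confirm the wheel works, then select a card.
import paddle

paddle.utils.run_check()            # expect "PaddlePaddle is installed successfully!"
paddle.set_device("npu:0")          # placeholder; "mlu:0" / "xpu:0" for the other plugins
print(paddle.device.get_device())   # should echo the selected device
```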

+ 9 - 9
docs/tutorials/data/annotation/ClsAnnoTools.md

@@ -20,14 +20,14 @@ pip install labelme
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/image_dir.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/3e333d6b-cbab-4161-b7df-9fb65a0576c7' width='600px'>
 </center>
 
 3. 在 pets 文件夹中创建待标注数据集的类别标签文件 flags.txt,并在 flags.txt 中按行写入待标注数据集的类别。以猫狗分类数据集的 flags.txt 为例,如下图所示:
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/label_txt.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/6d29c675-facf-4932-a5c6-52e7a7428181' width='600px'>
 </center>
 
 ### 2.2 启动 Labelme
@@ -46,35 +46,35 @@ labelme images --nodata --autosave --output annotations --flags flags.txt
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/labelme.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/5965e351-8f53-4ca2-85eb-bf1f53d1c50b' width='600px'>
 </center>
 
 2. 在 Flags 界面选择类别。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/flags.png' width='300px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/45889bd0-abb6-46ca-aa35-f4e124ad8481' width='300px'>
 </center>
 
 3. 标注好后点击存储。(若在启动 labelme 时未指定 --output 字段,会在第一次存储时提示选择存储路径,若指定 --autosave 字段使用自动保存,则无需点击存储按钮)。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/save.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/8a3f3e54-68a9-4f9a-8c68-63272fb2e0b6' width='100px'>
 </center>
 
 4. 然后点击 "Next Image" 进行下一张图片的标注。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/next_image.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/d9be34e1-d44c-4738-8101-3895c70a8b6e' width='100px'>
 </center>
 
 5. 最终标注好的标签文件如图所示。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/annotation_result.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/30432aae-7b5a-4539-ae09-fa476144ef6b' width='600px'>
 </center>
 
 6. 使用 [convert_to_imagenet.py](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/convert_to_imagenet.py) 脚本将标注好的数据集转换为 ImageNet-1k 数据集格式,生成 train.txt,val.txt 和label.txt。
@@ -88,7 +88,7 @@ python convert_to_imagenet.py --dataset_path /path/to/dataset
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/image_classification_dataset_prepare/directory_structure.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/23074d47-d2af-44fc-9377-b38cd7823f32' width='600px'>
 </center>
 
-8. 将 pets 目录打包压缩为 .tar 或 .zip 格式压缩包即可得到猫狗图像分类标准 labelme 格式数据集,然后上传至 [通用图像分类产线](https://aistudio.baidu.com/pipeline/mine) 经过数据化分后即可进行训练。
+8. 将 pets 目录打包压缩为 .tar 或 .zip 格式压缩包即可得到猫狗图像分类标准 labelme 格式数据集,然后上传至 [通用图像分类产线](https://aistudio.baidu.com/pipeline/mine) 经过数据化分后即可进行训练。

+ 23 - 24
docs/tutorials/data/annotation/DetAnnoTools.md

@@ -7,9 +7,9 @@
 图片示例:
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/example1.png' width='255px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/example2.png' width='227px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/example3.png' width='118px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/947e5e80-4857-46de-b750-88442128d3e8' width='255px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/d1474723-4f38-4b65-b93f-c99b9adcdb15' width='227px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/f31574a1-c94a-4692-9dc9-ad3793bb5e62' width='118px'>
 <br>
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/example4.png' width='231px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/example5.png' width='197px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/example6.png' width='173px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/631e64de-7c66-43d4-83d0-728098a61a7e' width='231px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/c464aa08-23d8-40aa-92f8-450f3039bae8' width='197px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/fae00720-f8e8-41d8-bb33-d4c079312e5c' width='173px'>
 </center>
 
 ## 2. Labelme标注工具使用
@@ -30,14 +30,14 @@ pip install labelme
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/image_dir.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/82730db4-a19f-4e08-8089-f398d230d266' width='600px'>
 </center>
 
 3. 在hemlet文件夹中创建待标注数据集的类别标签文件label.txt,并在label.txt中按行写入待标注数据集的类别。以安全帽检测数据集的label.txt为例,如下图所示:
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/label_txt.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/6cefbd00-0c4b-4111-bb58-ac5245f79127' width='600px'>
 </center>
 
 #### 2.3.2. 启动Labelme
@@ -56,56 +56,56 @@ labelme images --labels label.txt --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/labelme.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/5c21dd2b-0159-431a-b8b2-d8874e29c8d8' width='600px'>
 </center>
 
 2. 点击"编辑"选择标注类型
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/edit.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/f6a053fc-c9e9-4ebe-89a9-0c8f53248188' width='600px'>
 </center>
 
 3. 选择创建矩形框
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/rectangle.png' width='200px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/03173d0e-1c12-4ebf-8ee6-a4b637b6eaae' width='200px'>
 </center>
 
 4. 在图片上拖动十字框选目标区域
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/select_target_area.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/6b518a79-ead5-4484-94f1-1a9f5f7842de' width='600px'>
 </center>
 
 5. 再次点击选择目标框类别
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/select_category.png' width='200px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/c90d05b7-4e65-4c41-9a68-2cd10082f4da' width='200px'>
 </center>
 
 6. 标注好后点击存储。(若在启动labelme时未指定--output字段,会在第一次存储时提示选择存储路径,若指定--autosave字段使用自动保存,则无需点击存储按钮)
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/save.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/8a3f3e54-68a9-4f9a-8c68-63272fb2e0b6' width='100px'>
 </center>
 
 7. 然后点击"Next Image"进行下一张图片的标注
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/next_image.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/d9be34e1-d44c-4738-8101-3895c70a8b6e' width='100px'>
 </center>
 
 8. 最终标注好的标签文件如图所示
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/annotation_result.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/f172eb8e-5800-4f78-ad21-c8fcfc2d3489' width='600px'>
 </center>
 
 9. 调整目录得到安全帽检测标准labelme格式数据集
@@ -113,14 +113,14 @@ labelme images --labels label.txt --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/anno_list.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/77eaf28a-4d4c-4a02-962a-5b25c7b04b99' width='600px'>
 </center>
 
   b. 经过整理得到的最终目录结构如下:
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/directory_structure.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/b970ce62-fbb9-4cea-b5c0-b2e9565a02f9' width='600px'>
 </center>
 
   c. 将hemlet目录打包压缩为.tar或.zip格式压缩包即可得到安全帽检测标准labelme格式数据集
@@ -134,10 +134,10 @@ conda activate paddlelabel
 同样可以通过pip一键安装
 ```shell
 pip install --upgrade paddlelabel
-pip install a2wsgi uvicorn==0.18.1 
+pip install a2wsgi uvicorn==0.18.1
 pip install connexion==2.14.1
 pip install Flask==2.2.2
-pip install Werkzeug==2.2.2 
+pip install Werkzeug==2.2.2
 ```
 安装成功后,可以在终端使用如下指令启动 PaddleLabel
 paddlelabel  # 启动paddlelabel
@@ -148,14 +148,14 @@ PaddleLabel 启动后会自动在浏览器中打开网页,接下来可以根
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/welcome.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/275afab8-56d1-4d33-9616-696060dffdf1' width='600px'>
 </center>
 
 2. 填写项目名称,数据集路径,注意路径是本地机器上的 绝对路径。完成后点击创建。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/create_project.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/36371a57-e0a7-4307-aea8-840968a50b41' width='600px'>
 </center>
 
 3. 首先定义需要标注的类别,以版面分析为例,提供10个类别, 每个类别有唯一对应的id
@@ -174,7 +174,7 @@ PaddleLabel 启动后会自动在浏览器中打开网页,接下来可以根
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/overview.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/9e89f9cd-3339-4905-83ab-d87c72910821' width='600px'>
 </center>
 
 5. 导出标注文件
@@ -182,21 +182,21 @@ PaddleLabel 启动后会自动在浏览器中打开网页,接下来可以根
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/export_dataset.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/3a1cbd60-93d4-4590-9165-682fb90ffb82' width='600px'>
 </center>
 
   b. 填写导出路径和导出格式,导出路径依然是一个绝对路径,导出格式请选择coco
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/select_format.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/90acbd86-e265-4bc1-8d43-a1db041127b8' width='600px'>
 </center>
 
   c. 导出成功后,在指定的路径下就可以获得标注文件。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/labeled_file.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/38216b5c-e601-4281-8442-f8915d1114c3' width='600px'>
 </center>
 
 6. 调整目录得到安全帽检测标准coco格式数据集
@@ -215,8 +215,7 @@ PaddleLabel 启动后会自动在浏览器中打开网页,接下来可以根
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/object_detection_dataset_prepare/directory_structure2.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/1173c8e5-079c-4281-960d-dc740c6a8920' width='600px'>
 </center>
 
   c. 将hemlet目录打包压缩为.tar或.zip格式压缩包即可得到安全帽检测标准coco格式数据集
-

+ 14 - 16
docs/tutorials/data/annotation/InstSegAnnoTools.md

@@ -7,9 +7,9 @@
 图片示例:
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/example1.png' width='300px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/example2.png' width='300px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/df1ce3b2-39f4-4101-92e8-b45d670df90d' width='300px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/d61d1cf9-ed8d-454f-816d-70b0446c8d3a' width='300px'>
 <br>
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/example3.png' width='300px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/example4.png' width='300px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/95dded9f-a721-4a35-adfa-f665330d0a31' width='300px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/a9841891-61d6-480d-8b2e-303d2409b4ae' width='300px'>
 </center>
 
 ## 2. Labelme 标注工具使用
@@ -30,14 +30,14 @@ pip install labelme
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/image_dir.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/be007096-4d23-4f9b-9390-cab44afd2f23' width='600px'>
 </center>
 
 3. 在 fruit 文件夹中创建待标注数据集的类别标签文件 label.txt,并在 label.txt 中按行写入待标注数据集的类别。以水果实例分割数据集的 label.txt 为例,如下图所示:
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/label_txt.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/9fcae431-6538-4bdd-a53a-f6e68a5b68b5' width='600px'>
 </center>
 
 #### 2.3.2. 启动 Labelme
@@ -56,49 +56,49 @@ labelme images --labels label.txt --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/labelme.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/58976163-c6e2-4cb6-8bd9-dd248740baa7' width='600px'>
 </center>
 
 2. 点击 "Edit" 选择标注类型,选则 "Create Polygons"。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/edit.png' width='200px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/2a7bae57-cc02-495a-8470-ae6436743724' width='200px'>
 </center>
 
 3. 在图片上创建多边形描绘分割区域边界。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/select_target_area.png' width='300px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/1256b3df-4ffa-4e40-9750-90893c744af2' width='300px'>
 </center>
 
 4. 再次点击选择分割区域类别。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/select_category.png' width='200px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/4220612c-fa8f-4c64-b41a-f9f5a53bff27' width='200px'>
 </center>
 
 5. 标注好后点击存储。(若在启动 labelme 时未指定 --output 字段,会在第一次存储时提示选择存储路径,若指定 --autosave 字段使用自动保存,则无需点击存储按钮)。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/save.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/8a3f3e54-68a9-4f9a-8c68-63272fb2e0b6' width='100px'>
 </center>
 
 6. 然后点击 "Next Image" 进行下一张图片的标注。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/next_image.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/d9be34e1-d44c-4738-8101-3895c70a8b6e' width='100px'>
 </center>
 
 7. 最终标注好的标签文件如图所示。
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/annotation_result.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/afeb3c58-fc0b-4fea-ae86-8e30cb295369' width='600px'>
 </center>
 
 8. 调整目录得到水果实例分割标准 labelme 格式数据集。
@@ -108,16 +108,14 @@ labelme images --labels label.txt --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/anno_list1.png' width='600px'>
-<br>
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/anno_list1.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/beb79851-b568-4253-a551-fb67b15839ce' width='600px'>
 </center>
 
 9. 经过整理得到的最终目录结构如下:
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/instance_segmentation_dataset_prepare/directory_structure.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/7f18c043-fd7b-4b2e-b60b-325656131995' width='600px'>
 </center>
 
-10. 将 fruit 目录打包压缩为 .tar 或 .zip 格式压缩包即可得到水果实例分割标准 labelme 格式数据集。
+10. 将 fruit 目录打包压缩为 .tar 或 .zip 格式压缩包即可得到水果实例分割标准 labelme 格式数据集。

+ 14 - 15
docs/tutorials/data/annotation/SegAnnoTools.md

@@ -7,9 +7,9 @@
 图片示例:
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/example1.png' width='200px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/example2.png' width='200px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/example3.png' width='200px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/7c1c0a0c-00a9-4a70-8982-bf1387c1a79a' width='200px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/4573e6bc-2bb0-4378-aef1-aebbd7a9e1b5' width='200px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/71a71f5d-cdcd-475f-8707-843e65a77a32' width='200px'>
 <br>
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/example4.png' width='200px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/example5.png' width='200px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/example6.png' width='200px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/d5318da8-f2ee-4c0d-b991-40ba984d992e' width='200px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/c2ed688c-7127-429d-9037-a2f01734fec7' width='200px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/ebe5e90a-4170-4ae7-b18f-3bd7e9cf702f' width='200px'>
 </center>
 
 ## 2. Labelme标注工具使用
@@ -30,7 +30,7 @@ pip install labelme
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/image_dir.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/cd483b25-5453-4364-8724-8bba2544a230' width='600px'>
 </center>
 
 #### 2.3.2. 启动Labelme
@@ -53,35 +53,35 @@ labelme images --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/labelme.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/f7134296-538d-404a-bf75-d5907f32da6d' width='600px'>
 </center>
 
 2. 点击"编辑"选择标注类型
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/edit.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/629d712f-9b59-4330-9def-53272ad45f56' width='600px'>
 </center>
 
 3. 选择创建多边形
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/polygons.png' width='200px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/b33ee453-0628-4d21-81bd-dbfb2636bc9a' width='200px'>
 </center>
 
 4. 在图片上绘制目标轮廓
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/select_target_area.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/76be62db-287d-4076-9f6f-09b6cf68da51' width='600px'>
 </center>
 
 5. 出现如下左图所示轮廓线闭合时,弹出类别选择框,可输入或选择目标类别
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/finish_select.png' width='380px'><img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/select_category.png' width='220px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/db2ad9d4-7cda-41dc-b4bb-8edf480288ec' width='380px'><img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/54fdd56b-9615-4129-bbaa-775191a7267e' width='220px'>
 </center>
 
 通常情况下,只需要标注前景目标并设置标注类别即可,其他像素默认作为背景。如需要手动标注背景区域,**类别必须设置为 \_background\_**,否则格式转换数据集会出现错误。
@@ -90,14 +90,14 @@ labelme images --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/background.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/03d630e2-880b-450a-9adc-d76799695d49' width='600px'>
 </center>
 
 6. 标注好后点击存储。(若在启动 labelme 时未指定--output 字段,会在第一次存储时提示选择存储路径,若指定--autosave 字段使用自动保存,则无需点击存储按钮)
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/save.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/8a3f3e54-68a9-4f9a-8c68-63272fb2e0b6' width='100px'>
 </center>
 
 
@@ -105,14 +105,14 @@ labelme images --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/next_image.png' width='100px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/d9be34e1-d44c-4738-8101-3895c70a8b6e' width='100px'>
 </center>
 
 8. 最终标注好的标签文件如图所示
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/annotation_result.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/6645863d-45c1-4709-8b1f-abedf33440b6' width='600px'>
 </center>
 
 9. 调整目录得到安全帽检测标准labelme格式数据集
@@ -126,7 +126,7 @@ labelme images --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/anno_list.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/71de5ff6-25cb-4034-9197-d452c6c0806e' width='600px'>
 </center>
 
   &emsp;&emsp;
@@ -134,9 +134,8 @@ labelme images --nodata --autosave --output annotations
 
 <center>
 
-<img src='https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/applications/semantic_segmentation_dataset_prepare/directory_structure.png' width='600px'>
+<img src='https://github.com/PaddlePaddle/PaddleX/assets/142379845/0f40b46a-07ca-403d-9693-6dcf920de392' width='600px'>
 </center>
 
   &emsp;&emsp;
   c. 将 seg_dataset 目录打包压缩为 .tar 或 .zip 格式压缩包即可得到语义分割标准 labelme 格式数据集
-

+ 7 - 7
docs/tutorials/data/dataset_check.md

@@ -60,7 +60,7 @@ python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
 - attributes.val_sample_paths:该数据集验证集样本可视化图片相对路径列表;
 
 另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
-![样本分布直方图](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/open_source/quick_start/histogram.png)
+![样本分布直方图](https://github.com/PaddlePaddle/PaddleX/assets/142379845/e2cada1f-337f-4062-8504-077c90a3b8da)
 
 **注**:只有通过数据校验的数据才可以训练和评估。
 
@@ -140,7 +140,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
 - attributes.val_sample_paths:该数据集验证集样本可视化图片相对路径列表;
 
 另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
-![样本分布直方图](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/open_source/tutorials/data/dataset_check/object_detection/histogram.png)
+![样本分布直方图](https://github.com/PaddlePaddle/PaddleX/assets/142379845/d8a1fc2f-3e92-43d2-a75c-d13a13f3be05)
 
 **注**:只有通过数据校验的数据才可以训练和评估。
 
@@ -219,7 +219,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 - attributes.val_sample_paths:该数据集验证集样本可视化图片相对路径列表;
 
 另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
-![样本分布直方图](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/open_source/tutorials/data/dataset_check/semantic_segmentation/histogram.png)
+![样本分布直方图](https://github.com/PaddlePaddle/PaddleX/assets/142379845/2ba78919-4d86-40c7-b3f1-5a850d0a957d)
 
 **注**:只有通过数据校验的数据才可以训练和评估。
 
@@ -298,7 +298,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 - attributes.val_sample_paths:该数据集验证集样本可视化图片相对路径列表;
 
 另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
-![样本分布直方图](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/open_source/tutorials/data/dataset_check/instance_segmentation/histogram.png)
+![样本分布直方图](https://github.com/PaddlePaddle/PaddleX/assets/142379845/736f55cd-2102-4caf-8592-3a0d0fccf1f8)
 
 **注**:只有通过数据校验的数据才可以训练和评估。
 
@@ -375,7 +375,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 - attributes.val_sample_paths:该数据集验证集样本可视化图片相对路径列表;
 
 另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
-![样本分布直方图](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/open_source/tutorials/data/dataset_check/text_detection/histogram.png)
+![样本分布直方图](https://github.com/PaddlePaddle/PaddleX/assets/142379845/d47d8410-f8ac-4126-9565-c217528951e0)
 
 **注**:只有通过数据校验的数据才可以训练和评估。
 
@@ -452,7 +452,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 - attributes.val_sample_paths:该数据集验证集样本可视化图片相对路径列表;
 
 另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
-![样本分布直方图](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/open_source/tutorials/data/dataset_check/text_recognition/histogram.png)
+![样本分布直方图](https://github.com/PaddlePaddle/PaddleX/assets/142379845/2517ab81-e90f-4384-97f5-6f61785b161f)
 
 **注**:只有通过数据校验的数据才可以训练和评估。
 
@@ -862,7 +862,7 @@ python main.py -c paddlex/configs/ts_classify_examples/DLinear_ad.yaml \
 
 
 另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
-![样本分布直方图](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/open_source/tutorials/data/dataset_check/ts_classify_examples/histogram.png)
+![样本分布直方图](https://github.com/PaddlePaddle/PaddleX/assets/142379845/2b2d61d6-7d9b-427c-9248-5e45453d443d)
 
 **注**:只有通过数据校验的数据才可以训练和评估。
 

+ 0 - 0
docs/tutorials/inference/model_inference_api.md → docs/tutorials/models/model_inference_api.md


+ 0 - 0
docs/tutorials/inference/model_inference_tools.md → docs/tutorials/models/model_inference_tools.md


+ 1 - 14
docs/tutorials/models/support_mlu_model_list.md

@@ -38,20 +38,7 @@
 | :--- | :---: |
 | PP-HGNet_small | [PP-HGNet_small.yaml](../../../paddlex/configs/image_classification/PP-HGNet_small.yaml)|
 
-## 二、目标检测
-### 1. PP-YOLOE_plus系列
-| 模型名称 | config |
-| :--- | :---: |
-| PP-YOLOE_plus-S | [PP-YOLOE_plus-S.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml)|
-| PP-YOLOE_plus-M | [PP-YOLOE_plus-M.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-M.yaml)|
-| PP-YOLOE_plus-L | [PP-YOLOE_plus-L.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-L.yaml)|
-| PP-YOLOE_plus-X | [PP-YOLOE_plus-X.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-X.yaml)|
-### 2. PicoDet系列
-| 模型名称 | config |
-| :--- | :---: |
-| PicoDet-S | [PicoDet-S.yaml](../../../paddlex/configs/object_detection/PicoDet-S.yaml)|
-
-## 三、时序预测
+## 二、时序预测
 | 模型名称 | config |
 | :--- | :---: |
 | DLinear | [DLinear.yaml](../../../paddlex/configs/ts_forecast/DLinear.yaml)|

+ 1 - 1
docs/tutorials/models/support_model_list.md

@@ -1,4 +1,4 @@
-# PaddleX模型列表
+# PaddleX 模型列表
 
 ## 一、图像分类
 ### 1. ResNet系列

+ 20 - 13
docs/tutorials/models/support_npu_model_list.md

@@ -12,21 +12,33 @@
 ### 2.PP-LCNet系列
 | 模型名称 | config |
 | :--- | :---: |
+| PP-LCNet_x0_25 | [PP-LCNet_x0_25.yaml](../../../paddlex/configs/image_classification/PP-LCNet_x0_25.yaml)|
+| PP-LCNet_x0_35 | [PP-LCNet_x0_35.yaml](../../../paddlex/configs/image_classification/PP-LCNet_x0_35.yaml)|
+| PP-LCNet_x0_5 | [PP-LCNet_x0_5.yaml](../../../paddlex/configs/image_classification/ResNet50.yaml)|
+| PP-LCNet_x0_75 | [PP-LCNet_x0_5.yaml](../../../paddlex/configs/image_classification/ResNet101.yaml)|
 | PP-LCNet_x1_0 | [PP-LCNet_x1_0.yaml](../../../paddlex/configs/image_classification/PP-LCNet_x1_0.yaml)|
-
+| PP-LCNet_x1_5 | [PP-LCNet_x1_5.yaml](../../../paddlex/configs/image_classification/PP-LCNet_x1_5.yaml)|
+| PP-LCNet_x2_0 | [PP-LCNet_x2_0.yaml](../../../paddlex/configs/image_classification/PP-LCNet_x2_0.yaml)|
+| PP-LCNet_x2_5 | [PP-LCNet_x2_5.yaml](../../../paddlex/configs/image_classification/PP-LCNet_x2_5.yaml)|
 ### 3.MobileNetV2系列
 | 模型名称 | config |
 | :--- | :---: |
 | MobileNetV2_x0_25 | [MobileNetV2_x0_25.yaml](../../../paddlex/configs/image_classification/MobileNetV2_x0_25.yaml)|
 | MobileNetV2_x0_5 | [MobileNetV2_x0_5.yaml](../../../paddlex/configs/image_classification/MobileNetV2_x0_5.yaml)|
 | MobileNetV2_x1_0 | [MobileNetV2_x1_0.yaml](../../../paddlex/configs/image_classification/MobileNetV2_x1_0.yaml)|
-
 ### 4.MobileNetV3系列
 | 模型名称 | config |
 | :--- | :---: |
+| MobileNetV3_small_x0_35 | [MobileNetV3_small_x0_35.yaml](../../../paddlex/configs/image_classification/MobileNetV3_small_x0_35.yaml)|
+| MobileNetV3_small_x0_5 | [MobileNetV3_small_x0_5.yaml](../../../paddlex/configs/image_classification/MobileNetV3_small_x0_5.yaml)|
+| MobileNetV3_small_x0_75 | [MobileNetV3_small_x0_75.yaml](../../../paddlex/configs/image_classification/MobileNetV3_small_x0_75.yaml)|
 | MobileNetV3_small_x1_0 | [MobileNetV3_small_x1_0.yaml](../../../paddlex/configs/image_classification/MobileNetV3_small_x1_0.yaml)|
+| MobileNetV3_small_x1_25 | [MobileNetV3_small_x1_25.yaml](../../../paddlex/configs/image_classification/MobileNetV3_small_x1_25.yaml)|
+| MobileNetV3_large_x0_35 | [MobileNetV3_large_x0_35.yaml](../../../paddlex/configs/image_classification/MobileNetV3_large_x0_35.yaml)|
+| MobileNetV3_large_x0_5 | [MobileNetV3_large_x0_5.yaml](../../../paddlex/configs/image_classification/MobileNetV3_large_x0_5.yaml)|
+| MobileNetV3_large_x0_75 | [MobileNetV3_large_x0_75.yaml](../../../paddlex/configs/image_classification/MobileNetV3_large_x0_75.yaml)|
 | MobileNetV3_large_x1_0 | [MobileNetV3_large_x1_0.yaml](../../../paddlex/configs/image_classification/MobileNetV3_large_x1_0.yaml)|
-
+| MobileNetV3_large_x1_25 | [MobileNetV3_large_x1_25.yaml](../../../paddlex/configs/image_classification/MobileNetV3_large_x1_25.yaml)|
 ### 5.PP-HGNet系列
 | 模型名称 | config |
 | :--- | :---: |
@@ -36,16 +48,16 @@
 | :--- | :---: |
 | PP-HGNetV2-B0 | [PP-HGNetV2-B0.yaml](../../../paddlex/configs/image_classification/PP-HGNetV2-B0.yaml)|
 | PP-HGNetV2-B4 | [PP-HGNetV2-B4.yaml](../../../paddlex/configs/image_classification/PP-HGNetV2-B4.yaml)|
-
+| PP-HGNetV2-B6 | [PP-HGNetV2-B6.yaml](../../../paddlex/configs/image_classification/PP-HGNetV2-B6.yaml)|
 ### 7.SwinTransformer系列
 | 模型名称 | config |
 | :--- | :---: |
 | SwinTransformer_base_patch4_window7_224 | [SwinTransformer_base_patch4_window7_224.yaml](../../../paddlex/configs/image_classification/SwinTransformer_base_patch4_window7_224.yaml)|
-
 ### 8.ConvNeXt系列
 | 模型名称 | config |
 | :--- | :---: |
 | ConvNeXt_tiny | [ConvNeXt_tiny.yaml](../../../paddlex/configs/image_classification/ConvNeXt_tiny.yaml)|
+
 ## 二、目标检测
 ### 1. PP-YOLOE_plus系列
 | 模型名称 | config |
@@ -67,6 +79,7 @@
 | :--- | :---: |
 | PicoDet-S | [PicoDet-S.yaml](../../../paddlex/configs/object_detection/PicoDet-S.yaml)|
 | PicoDet-L | [PicoDet-L.yaml](../../../paddlex/configs/object_detection/PicoDet-L.yaml)|
+
 ## 三、语义分割
 ### 1.Deeplabv3系列
 | 模型名称 | config |
@@ -79,15 +92,9 @@
 | 模型名称 | config |
 | :--- | :---: |
 | PP-LiteSeg-T | [PP-LiteSeg-T.yaml](../../../paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml)|
-## 四、文本检测
-### 1.PP-OCRv4系列
-| 模型名称 | config |
-| :--- | :---: |
-| PP-OCRv4_server_det | [PP-OCRv4_server_det.yaml](../../../paddlex/configs/text_detection/PP-OCRv4_server_det.yaml)|
-| PP-OCRv4_mobile_det | [PP-OCRv4_mobile_det.yaml](../../../paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml)|
-## 五、时序预测
+
+## 四、时序预测
 | 模型名称 | config |
 | :--- | :---: |
 | DLinear | [DLinear.yaml](../../../paddlex/configs/ts_forecast/DLinear.yaml)|
-| RLinear | [RLinear.yaml](../../../paddlex/configs/ts_forecast/RLinear.yaml)|
 | NLinear | [NLinear.yaml](../../../paddlex/configs/ts_forecast/NLinear.yaml)|

+ 4 - 14
docs/tutorials/models/support_xpu_model_list.md

@@ -24,23 +24,13 @@
 | 模型名称 | config |
 | :--- | :---: |
 | PP-HGNet_small | [PP-HGNet_small.yaml](../../../paddlex/configs/image_classification/PP-HGNet_small.yaml)|
-## 二、目标检测
-### 1. PP-YOLOE_plus系列
-| 模型名称 | config |
-| :--- | :---: |
-| PP-YOLOE_plus-S | [PP-YOLOE_plus-S.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml)|
-| PP-YOLOE_plus-M | [PP-YOLOE_plus-M.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-M.yaml)|
-| PP-YOLOE_plus-L | [PP-YOLOE_plus-L.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-L.yaml)|
-| PP-YOLOE_plus-X | [PP-YOLOE_plus-X.yaml](../../../paddlex/configs/object_detection/PP-YOLOE_plus-X.yaml)|
-### 2. PicoDet系列
-| 模型名称 | config |
-| :--- | :---: |
-| PicoDet-S | [PicoDet-S.yaml](../../../paddlex/configs/object_detection/PicoDet-S.yaml)|
-## 三、版面分析
+
+## 二、版面分析
 | 模型名称 | config |
 | :--- | :---: |
 | PicoDet_layout_1x | [PicoDet_layout_1x.yaml](../../../paddlex/configs/structure_analysis/PicoDet_layout_1x.yaml)|
-## 四、时序预测
+
+## 三、时序预测
 | 模型名称 | config |
 | :--- | :---: |
 | DLinear | [DLinear.yaml](../../../paddlex/configs/ts_forecast/DLinear.yaml)|

+ 0 - 0
docs/tutorials/inference/pipeline_inference_api.md → docs/tutorials/pipelines/pipeline_inference_api.md


+ 0 - 0
docs/tutorials/inference/pipeline_inference_tools.md → docs/tutorials/pipelines/pipeline_inference_tools.md


+ 1 - 0
docs/tutorials/pipelines/support_pipeline_list.md

@@ -0,0 +1 @@
+# PaddleX 模型产线列表

+ 1 - 1
paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml

@@ -28,5 +28,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_ad/ts_anomaly_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_anomaly_detection/DLinear_ad.yaml

@@ -28,5 +28,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_ad/ts_anomaly_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_anomaly_detection/Nonstationary_ad.yaml

@@ -28,5 +28,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_ad/ts_anomaly_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_anomaly_detection/PatchTST_ad.yaml

@@ -28,5 +28,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_ad/ts_anomaly_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_anomaly_detection/TimesNet_ad.yaml

@@ -28,5 +28,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_ad/ts_anomaly_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_classification/TimesNet_cls.yaml

@@ -28,5 +28,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_cls/ts_classify_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_forecast/DLinear.yaml

@@ -29,5 +29,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_fc/ts_dataset_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_forecast/NLinear.yaml

@@ -29,5 +29,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_fc/ts_dataset_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_forecast/Nonstationary.yaml

@@ -29,5 +29,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_fc/ts_dataset_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_forecast/PatchTST.yaml

@@ -29,5 +29,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_fc/ts_dataset_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_forecast/RLinear.yaml

@@ -29,5 +29,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_fc/ts_dataset_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_forecast/TiDE.yaml

@@ -29,5 +29,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_fc/ts_dataset_examples/test.csv"

+ 1 - 1
paddlex/configs/ts_forecast/TimesNet.yaml

@@ -29,5 +29,5 @@ Evaluate:
   weight_path: "output/best_accuracy.pdparams.tar"
 
 Predict:
-  model_dir: "output/best_model/model.pdparams"
+  model_dir: "output/best_accuracy.pdparams.tar"
   input_path: "/paddle/dataset/paddlex/ts_fc/ts_dataset_examples/test.csv"

+ 14 - 0
paddlex/modules/base/predictor/io/readers.py

@@ -17,6 +17,7 @@
 import enum
 import itertools
 import cv2
+from PIL import Image, ImageOps
 
 __all__ = ['ImageReader', 'VideoReader', 'ReaderType']
 
@@ -77,6 +78,8 @@ class ImageReader(_BaseReader):
         """ init backend """
         if bk_type == 'opencv':
             return OpenCVImageReaderBackend(**bk_args)
+        elif bk_type == 'pil':
+            return PILImageReaderBackend(**bk_args)
         else:
             raise ValueError("Unsupported backend type")
 
@@ -155,6 +158,17 @@ class OpenCVImageReaderBackend(_ImageReaderBackend):
         return cv2.imread(in_path, flags=self.flags)
 
 
+class PILImageReaderBackend(_ImageReaderBackend):
+    """ PILImageReaderBackend """
+
+    def __init__(self):
+        super().__init__()
+
+    def read_file(self, in_path):
+        """ read image file from path by PIL """
+        return ImageOps.exif_transpose(Image.open(in_path))
+
+
 class _VideoReaderBackend(_BaseReaderBackend):
     """ _VideoReaderBackend """
 

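Editor's note: the classification and detection transforms later in this commit switch from `Image.open` to this reader. A brief sketch of the new backend in isolation (the file path is a placeholder); the point of the PIL backend is that `ImageOps.exif_transpose` applies the EXIF Orientation tag, which `cv2.imread` ignores:

```python
# Sketch: read an image through the new PIL backend so EXIF orientation is honoured.
from paddlex.modules.base.predictor.io import ImageReader

reader = ImageReader(backend='pil')     # rather than the 'opencv' backend
image = reader.read("demo.jpg")         # placeholder path; returns a PIL.Image
print(image.size, image.mode)
```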
+ 5 - 0
paddlex/modules/base/predictor/kernel_option.py

@@ -158,6 +158,11 @@ class PaddleInferenceOption(object):
         """
         return self.SUPPORT_DEVICE
 
+    def get_device(self):
+        """get device
+        """
+        return f"{self._cfg['device']}:{self._cfg['device_id']}"
+
     def __str__(self):
         return ",  ".join([f"{k}: {v}" for k, v in self._cfg.items()])
 

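Editor's note: a one-line illustration of the string the new getter builds (assumption: `_cfg` carries the `device` and `device_id` values set elsewhere on the option):

```python
# Sketch of the format returned by PaddleInferenceOption.get_device().
def format_device(device: str, device_id: int) -> str:
    return f"{device}:{device_id}"

assert format_device("gpu", 0) == "gpu:0"   # e.g. suitable for paddle.set_device
```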
+ 5 - 4
paddlex/modules/base/predictor/predictor.py

@@ -36,12 +36,14 @@ class BasePredictor(ABC, FromDictMixin, Node):
     MODEL_FILE_TAG = 'inference'
 
     def __init__(self,
+                 model_name,
                  model_dir,
                  kernel_option,
                  output,
                  pre_transforms=None,
                  post_transforms=None):
         super().__init__()
+        self.model_name = model_name
         self.model_dir = model_dir
         self.kernel_option = kernel_option
         self.output = output
@@ -169,6 +171,7 @@ class PredictorBuilderByConfig(object):
         self.input_path = predict_config.pop('input_path')
 
         self.predictor = BasePredictor.get(model_name)(
+            model_name=model_name,
             model_dir=model_dir,
             kernel_option=kernel_option,
             output=config.Global.output,
@@ -201,10 +204,8 @@ def create_model(model_name,
     if model_dir is None:
         if model_name in official_models:
             model_dir = official_models[model_name]
-        else:
-            # model name is invalid
-            BasePredictor.get(model_name)
-    return BasePredictor.get(model_name)(model_dir=model_dir,
+    return BasePredictor.get(model_name)(model_name=model_name,
+                                         model_dir=model_dir,
                                          kernel_option=kernel_option,
                                          output=output,
                                          pre_transforms=pre_transforms,

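Editor's note: a hedged sketch of building a predictor through the updated `create_model` (the model name is an example from the supported-model list; that `PaddleInferenceOption()` is default-constructible is an assumption not shown in this diff):

```python
# Sketch only: model_name is now stored on the predictor and used to resolve
# official weights when no local model_dir is supplied.
from paddlex.modules.base.predictor.kernel_option import PaddleInferenceOption
from paddlex.modules.base.predictor.predictor import create_model

predictor = create_model(
    model_name="PP-LCNet_x1_0",              # resolved via official_models when model_dir is None
    model_dir=None,
    kernel_option=PaddleInferenceOption(),   # assumption: default-constructible
    output="./output",
)
print(predictor.model_name)                  # the new attribute set in BasePredictor.__init__
```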
+ 2 - 1
paddlex/modules/base/predictor/utils/paddle_inference_predictor.py

@@ -47,11 +47,12 @@ self._create(param_path, model_path, option, delete_pass=delete_pass)
             os.environ["FLAGS_npu_jit_compile"] = "0"
             os.environ["FLAGS_use_stride_kernel"] = "0"
             os.environ["FLAGS_allocator_strategy"] = "auto_growth"
+            os.environ[
+                "CUSTOM_DEVICE_BLACK_LIST"] = "pad3d,pad3d_grad,set_value,set_value_with_tensor"
         elif option.device == 'xpu':
             config.enable_custom_device('npu')
         elif option.device == 'mlu':
             config.enable_custom_device('mlu')
-            os.environ["CUSTOM_DEVICE_BLACK_LIST"] = "set_value"
         else:
             assert option.device == 'cpu'
             config.disable_gpu()

+ 5 - 6
paddlex/modules/image_classification/predictor/transforms.py

@@ -15,15 +15,14 @@
 
 import os
 import json
-from PIL import Image, ImageDraw, ImageFont
 from pathlib import Path
-
 import numpy as np
+from PIL import ImageDraw, ImageFont
 
-from ....utils.fonts import PINGFANG_FONT_FILE_PATH
-from ...base import BaseTransform
-from ...base.predictor.io.writers import ImageWriter
 from .keys import ClsKeys as K
+from ...base import BaseTransform
+from ...base.predictor.io import ImageWriter, ImageReader
+from ....utils.fonts import PINGFANG_FONT_FILE_PATH
 from ....utils import logging
 
 __all__ = ["Topk", "NormalizeFeatures", "PrintResult", "SaveClsResults"]
@@ -171,7 +170,7 @@ class SaveClsResults(BaseTransform):
         file_name = os.path.basename(ori_path)
         save_path = os.path.join(self.save_dir, file_name)
 
-        image = Image.open(ori_path)
+        image = ImageReader(backend='pil').read(ori_path)
         image = image.convert('RGB')
         image_size = image.size
         draw = ImageDraw.Draw(image)

+ 3 - 3
paddlex/modules/object_detection/predictor/transforms.py

@@ -21,7 +21,7 @@ from PIL import Image, ImageDraw, ImageFont
 
 from .keys import DetKeys as K
 from ...base import BaseTransform
-from ...base.predictor.io.writers import ImageWriter
+from ...base.predictor.io import ImageWriter, ImageReader
 from ...base.predictor.transforms import image_functions as F
 from ...base.predictor.transforms.image_common import _BaseResize, _check_image_size
 from ....utils.fonts import PINGFANG_FONT_FILE_PATH
@@ -196,7 +196,7 @@ class SaveDetResults(BaseTransform):
         save_path = os.path.join(self.save_dir, file_name)
 
         labels = self.labels
-        image = Image.open(ori_path)
+        image = ImageReader(backend='pil').read(ori_path)
         if K.MASKS in data:
             image = draw_mask(
                 image,
@@ -230,7 +230,7 @@ class SaveDetResults(BaseTransform):
 class PadStride(BaseTransform):
     """ padding image for model with FPN , instead PadBatch(pad_to_stride, pad_gt) in original config
     Args:
-        stride (bool): model with FPN need image shape % stride == 0 
+        stride (bool): model with FPN need image shape % stride == 0
     """
 
     def __init__(self, stride=0):

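The `PadStride` docstring touched above only states the invariant (image shape must be divisible by the stride for FPN models). A self-contained sketch of that padding, written independently of the class in the diff and assuming CHW input:

```python
import math
import numpy as np

def pad_to_stride(image_chw: np.ndarray, stride: int = 32) -> np.ndarray:
    """Zero-pad a CHW image so height and width are multiples of `stride`."""
    if stride <= 0:
        return image_chw
    c, h, w = image_chw.shape
    padded_h = int(math.ceil(h / stride) * stride)
    padded_w = int(math.ceil(w / stride) * stride)
    padded = np.zeros((c, padded_h, padded_w), dtype=image_chw.dtype)
    padded[:, :h, :w] = image_chw
    return padded

# A 3x500x375 image becomes 3x512x384 with the default stride of 32.
assert pad_to_stride(np.zeros((3, 500, 375), dtype=np.float32)).shape == (3, 512, 384)
```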
+ 6 - 4
paddlex/modules/semantic_segmentation/evaluator.py

@@ -13,7 +13,9 @@
 # limitations under the License.
 
 
+import os
 from pathlib import Path
+
 from ..base import BaseEvaluator
 from .model_list import MODELS
 
@@ -51,7 +53,7 @@ class SegEvaluator(BaseEvaluator):
         Returns:
             dict: the arguments of evaluation function.
         """
-        return {
-            "weight_path": self.eval_config.weight_path,
-            "device": self.get_device(),
-        }
+        device = self.get_device()
+        # XXX:
+        os.environ.pop("FLAGS_npu_jit_compile", None)
+        return {"weight_path": self.eval_config.weight_path, "device": device}

+ 4 - 0
paddlex/modules/semantic_segmentation/predictor/predictor.py

@@ -31,6 +31,7 @@ class SegPredictor(BasePredictor):
     entities = MODELS
 
     def __init__(self,
+                 model_name,
                  model_dir,
                  kernel_option,
                  output,
@@ -38,6 +39,7 @@ class SegPredictor(BasePredictor):
                  post_transforms=None,
                  has_prob_map=False):
         super().__init__(
+            model_name=model_name,
             model_dir=model_dir,
             kernel_option=kernel_option,
             output=output,
@@ -65,6 +67,8 @@ class SegPredictor(BasePredictor):
 
     def _run(self, batch_input):
         """ run """
+        # XXX:
+        os.environ.pop("FLAGS_npu_jit_compile", None)
         images = [data[K.IMAGE] for data in batch_input]
         input_ = np.stack(images, axis=0)
         if input_.ndim == 3:

+ 2 - 0
paddlex/modules/semantic_segmentation/trainer.py

@@ -56,6 +56,8 @@ class SegTrainer(BaseTrainer):
             dict: the arguments of training function.
         """
         train_args = {"device": self.get_device()}
+        # XXX:
+        os.environ.pop("FLAGS_npu_jit_compile", None)
         if self.train_config.batch_size is not None:
             train_args["batch_size"] = self.train_config.batch_size
         if self.train_config.learning_rate is not None:

+ 2 - 0
paddlex/modules/table_recognition/predictor/predictor.py

@@ -31,6 +31,7 @@ class TableRecPredictor(BasePredictor):
     entities = MODELS
 
     def __init__(self,
+                 model_name,
                  model_dir,
                  kernel_option,
                  output,
@@ -38,6 +39,7 @@ class TableRecPredictor(BasePredictor):
                  post_transforms=None,
                  table_max_len=488):
         super().__init__(
+            model_name=model_name,
             model_dir=model_dir,
             kernel_option=kernel_option,
             output=output,

+ 2 - 4
paddlex/modules/table_recognition/predictor/transforms.py

@@ -16,16 +16,14 @@
 
 import os
 import os.path as osp
-
 import numpy as np
-from PIL import Image
 import cv2
 import paddle
 
-from ....utils import logging
+from .keys import TableRecKeys as K
 from ...base import BaseTransform
 from ...base.predictor.io.writers import ImageWriter
-from .keys import TableRecKeys as K
+from ....utils import logging
 
 __all__ = ['TableLabelDecode', 'TableMasterLabelDecode', 'SaveTableResults']
 

+ 17 - 13
paddlex/modules/text_recognition/predictor/transforms.py

@@ -83,15 +83,19 @@ class OCRReisizeNormImg(BaseTransform):
 class BaseRecLabelDecode(BaseTransform):
     """ Convert between text-label and text-index """
 
-    def __init__(self, character_str=None):
+    def __init__(self, character_str=None, use_space_char=True):
         self.reverse = False
-        dict_character = character_str if character_str is not None else "0123456789abcdefghijklmnopqrstuvwxyz"
+        character_list = list(
+            character_str) if character_str is not None else list(
+                "0123456789abcdefghijklmnopqrstuvwxyz")
+        if use_space_char:
+            character_list.append(" ")
 
-        dict_character = self.add_special_char(dict_character)
+        character_list = self.add_special_char(character_list)
         self.dict = {}
-        for i, char in enumerate(dict_character):
+        for i, char in enumerate(character_list):
             self.dict[char] = i
-        self.character = dict_character
+        self.character = character_list
 
     def pred_reverse(self, pred):
         """ pred_reverse """
@@ -110,9 +114,9 @@ class BaseRecLabelDecode(BaseTransform):
 
         return ''.join(pred_re[::-1])
 
-    def add_special_char(self, dict_character):
+    def add_special_char(self, character_list):
         """ add_special_char """
-        return dict_character
+        return character_list
 
     def decode(self, text_index, text_prob=None, is_remove_duplicate=False):
         """ convert text-index into text-label. """
@@ -179,10 +183,10 @@ class BaseRecLabelDecode(BaseTransform):
 class CTCLabelDecode(BaseRecLabelDecode):
     """ Convert between text-label and text-index """
 
-    def __init__(self, post_process_cfg=None):
+    def __init__(self, post_process_cfg=None, use_space_char=True):
         assert post_process_cfg['name'] == 'CTCLabelDecode'
-        character_str = post_process_cfg['character_dict']
-        super().__init__(character_str)
+        character_list = post_process_cfg['character_dict']
+        super().__init__(character_list, use_space_char=use_space_char)
 
     def apply(self, data):
         """ apply """
@@ -199,10 +203,10 @@ class CTCLabelDecode(BaseRecLabelDecode):
             data[K.REC_SCORE].append(t[1])
         return data
 
-    def add_special_char(self, dict_character):
+    def add_special_char(self, character_list):
         """ add_special_char """
-        dict_character = ['blank'] + dict_character
-        return dict_character
+        character_list = ['blank'] + character_list
+        return character_list
 
     @classmethod
     def get_input_keys(cls):

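To make the `use_space_char` change concrete: the table now defaults to lowercase alphanumerics, optionally appends a space, and `CTCLabelDecode` prepends the CTC blank at index 0. A self-contained sketch of that table plus a minimal greedy decode (the real transform operates on the batch dict keys shown above, which are omitted here):

```python
def build_ctc_characters(character_str=None, use_space_char=True):
    """Rebuild the character table the changed constructors produce."""
    chars = (list(character_str) if character_str is not None
             else list("0123456789abcdefghijklmnopqrstuvwxyz"))
    if use_space_char:
        chars.append(" ")
    # CTCLabelDecode.add_special_char prepends the blank token at index 0.
    return ["blank"] + chars

def greedy_ctc_decode(indices, characters, remove_duplicate=True):
    """Collapse repeats, drop blanks (index 0), and map indices to text."""
    out, prev = [], None
    for idx in indices:
        if not (remove_duplicate and idx == prev) and idx != 0:
            out.append(characters[idx])
        prev = idx
    return "".join(out)

chars = build_ctc_characters()            # 'h' is index 18, 'i' is index 19
assert greedy_ctc_decode([18, 18, 0, 19], chars) == "hi"
```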
+ 28 - 28
paddlex/modules/ts_forecast/predictor.py

@@ -14,6 +14,7 @@
 
 
 from pathlib import Path
+import tarfile
 
 from typing import Union
 from ...utils import logging
@@ -27,18 +28,26 @@ class TSFCPredictor(BasePredictor):
     """ TS Forecast Model Predictor """
     entities = MODELS
 
-    def __init__(self, config):
-        """Initialize the instance.
-
-        Args:
-            config (AttrDict):  PaddleX pipeline config, which is loaded from pipeline yaml file.
+    def __init__(self, model_name, model_dir, kernel_option, output):
+        """initialize
         """
-        self.global_config = config.Global
-        self.predict_config = config.Predict
+        self.model_dir = self.uncompress_tar_file(model_dir)
 
+        self.device = kernel_option.get_device()
+        self.output = output
         config_path = self.get_config_path()
         self.pdx_config, self.pdx_model = build_model(
-            self.global_config.model, config_path=config_path)
+            model_name, config_path=config_path)
+
+    def uncompress_tar_file(self, model_dir):
+        """unpackage the tar file containing training outputs and update weight path
+        """
+        if tarfile.is_tarfile(model_dir):
+            dest_path = Path(model_dir).parent
+            with tarfile.open(model_dir, 'r') as tar:
+                tar.extractall(path=dest_path)
+            return dest_path / "best_accuracy.pdparams/best_model/model.pdparams"
+        return model_dir
 
     def get_config_path(self) -> Union[str, None]:
         """
@@ -48,33 +57,25 @@ class TSFCPredictor(BasePredictor):
             config_path (str): The path to the config
 
         """
-        model_dir = self.predict_config.model_dir
-        if Path(model_dir).exists():
-            config_path = Path(model_dir).parent.parent / "config.yaml"
+        if Path(self.model_dir).exists():
+            config_path = Path(self.model_dir).parent.parent / "config.yaml"
             if config_path.exists():
                 return config_path
             else:
                 logging.warning(
-                    f"The config file(`{config_path}`) related to model weight file(`{self.predict_config.model_dir}`) \
+                    f"The config file(`{config_path}`) related to model weight file(`{self.model_dir}`) \
 is not exist, use default instead.")
         else:
-            raise_model_not_found_error(model_dir)
+            raise_model_not_found_error(self.model_dir)
         return None
 
-    def predict(self, input=None, batch_size=1):
+    def predict(self, input):
         """execute model predict
-
-        Returns:
-            dict: the prediction results
-        """
-        results = self.predict()
-
-    def predict(self):
-        """predict using specified model
         """
         # self.update_config()
-        result = self.pdx_model.predict(**self.get_predict_kwargs())
+        result = self.pdx_model.predict(**input, **self.get_predict_kwargs())
         assert result.returncode == 0, f"Encountered an unexpected error({result.returncode}) in predicting!"
+        return result
 
     def get_predict_kwargs(self) -> dict:
         """get key-value arguments of model predict function
@@ -83,10 +84,9 @@ is not exist, use default instead.")
             dict: the arguments of predict function.
         """
         return {
-            "weight_path": self.predict_config.model_dir,
-            "input_path": self.predict_config.input_path,
-            "device": self.global_config.device,
-            "save_dir": self.global_config.output
+            "weight_path": self.model_dir,
+            "device": self.device,
+            "save_dir": self.output
         }
 
     def _get_post_transforms_from_config(self):
@@ -100,7 +100,7 @@ is not exist, use default instead.")
 
     def get_input_keys(self):
         """ get input keys """
-        pass
+        return ["input_path"]
 
     def get_output_keys(self):
         """ get output keys """

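A standalone sketch of the archive handling `uncompress_tar_file` introduces above: if the given path is a tar archive of training outputs, unpack it next to the archive and return the weight file inside, using the same fixed internal layout the hunk hard-codes; otherwise pass the path through unchanged.

```python
import tarfile
from pathlib import Path

def resolve_ts_weights(model_path: str) -> Path:
    """Return the weight file for a TS model, unpacking a tar archive if given one."""
    if tarfile.is_tarfile(model_path):
        dest = Path(model_path).parent
        with tarfile.open(model_path, "r") as tar:
            tar.extractall(path=dest)
        # Fixed layout assumed by the hunk above.
        return dest / "best_accuracy.pdparams/best_model/model.pdparams"
    return Path(model_path)
```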
+ 0 - 21
paddlex/utils/config.py

@@ -97,27 +97,6 @@ def print_config(config):
     print_dict(config)
 
 
-# def check_config(config):
-#     """
-#     Check config
-#     """
-#     from . import check
-#     check.check_version()
-#     use_gpu = config.get('use_gpu', True)
-#     if use_gpu:
-#         check.check_gpu()
-#     architecture = config.get('ARCHITECTURE')
-#     #check.check_architecture(architecture)
-#     use_mix = config.get('use_mix', False)
-#     check.check_mix(architecture, use_mix)
-#     classes_num = config.get('classes_num')
-#     check.check_classes_num(classes_num)
-#     mode = config.get('mode', 'train')
-#     if mode.lower() == 'train':
-#         check.check_function_params(config, 'LEARNING_RATE')
-#         check.check_function_params(config, 'OPTIMIZER')
-
-
 def override(dl, ks, v):
     """
     Recursively replace dict of list

+ 2 - 2
paddlex/utils/device.py

@@ -30,8 +30,8 @@ def get_device(device_cfg, using_device_number=None):
             os.environ["FLAGS_npu_jit_compile"] = "0"
             os.environ["FLAGS_use_stride_kernel"] = "0"
             os.environ["FLAGS_allocator_strategy"] = "auto_growth"
-        elif device.lower() == "mlu":
-            os.environ["CUSTOM_DEVICE_BLACK_LIST"] = "set_value"
+            os.environ[
+                "CUSTOM_DEVICE_BLACK_LIST"] = "pad3d,pad3d_grad,set_value,set_value_with_tensor"
         if len(device_cfg.split(":")) == 2:
             device_ids = device_cfg.split(":")[1]
         else:

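Around the changed lines, `get_device` also splits an optional id list off the device string ("gpu:0,1" style). A small self-contained sketch of just that parsing, with the env-flag side effects and `using_device_number` left out; the default id of `0` when none is given is an assumption, not taken from the diff:

```python
def parse_device(device_cfg: str):
    """Split a PaddleX-style device string into (device, list of ids)."""
    parts = device_cfg.split(":")
    device = parts[0].lower()
    device_ids = parts[1] if len(parts) == 2 else "0"  # assumed default
    return device, [int(i) for i in device_ids.split(",")]

assert parse_device("gpu:0,1") == ("gpu", [0, 1])
assert parse_device("npu") == ("npu", [0])
```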
+ 1 - 1
setup.py

@@ -91,7 +91,7 @@ def check_paddle_version():
     """check paddle version
     """
     import paddle
-    supported_versions = ['2.6', '3.0', '0.0']
+    supported_versions = ['3.0', '0.0']
     version = paddle.__version__
     # Recognizable version number: major.minor.patch
     major, minor, patch = version.split('.')
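
The change drops 2.6 from the supported list, keeping 3.0 and 0.0 (the version string typically reported by develop/nightly builds). For context, a hedged sketch of how a check against that list can work; the real `check_paddle_version` may differ in its error handling:

```python
def is_supported_paddle_version(version: str, supported=("3.0", "0.0")) -> bool:
    """Accept versions whose major.minor prefix is in `supported`."""
    major, minor, *_ = version.split(".")
    return f"{major}.{minor}" in supported

assert is_supported_paddle_version("3.0.0")
assert is_supported_paddle_version("0.0.0")       # develop builds
assert not is_supported_paddle_version("2.6.1")   # dropped by this change
```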