
Merge remote-tracking branch 'origin/develop' into deploy_jf

will-jl944 · 4 years ago · commit 1a24bb2b37
63 changed files with 610 additions and 256 deletions
  1. deploy/cpp/docs/compile/openvino/openvino_windows.md (+7 -7)
  2. deploy/cpp/docs/compile/paddle/linux.md (+8 -8)
  3. deploy/cpp/docs/compile/paddle/windows.md (+1 -1)
  4. deploy/cpp/docs/demo/decrypt_infer.md (+1 -1)
  5. deploy/cpp/docs/demo/model_infer.md (+1 -1)
  6. deploy/cpp/docs/demo/multi_gpu_model_infer.md (+2 -2)
  7. deploy/cpp/docs/demo/tensorrt_infer.md (+2 -2)
  8. deploy/cpp/docs/manufacture_sdk/README.md (+3 -3)
  9. deploy/cpp/docs/models/paddlex.md (+1 -1)
  10. docs/CHANGELOG.md (+3 -3)
  11. docs/apis/export_model.md (+1 -1)
  12. docs/apis/images/detection_analysis.jpg (BIN)
  13. docs/apis/images/insect_bbox-allclass-allarea.png (BIN)
  14. docs/apis/images/insect_bbox_pr_curve(iou-0.5).png (BIN)
  15. docs/apis/prediction.md (+2 -2)
  16. docs/apis/visualize.md (+120 -8)
  17. docs/install.md (+1 -1)
  18. examples/C#_deploy/Program.cs (+4 -4)
  19. examples/C#_deploy/README.md (+3 -3)
  20. examples/defect_detection/README.md (+1 -1)
  21. examples/defect_detection/code/train.py (+1 -1)
  22. examples/meter_reader/README.md (+8 -8)
  23. examples/meter_reader/deploy/cpp/meter_reader/meter_pipeline.yml (+2 -2)
  24. examples/meter_reader/train_detection.py (+1 -1)
  25. examples/robot_grab/README.md (+1 -1)
  26. examples/robot_grab/code/train.py (+1 -1)
  27. paddlex/cv/models/detector.py (+56 -32)
  28. paddlex/cv/models/utils/det_metrics/coco_utils.py (+241 -1)
  29. paddlex/cv/transforms/batch_operators.py (+2 -12)
  30. paddlex/deploy.py (+12 -27)
  31. paddlex/det.py (+2 -0)
  32. static/deploy/openvino/src/paddlex.cpp (+1 -0)
  33. tutorials/slim/prune/image_classification/mobilenetv2_prune.py (+6 -6)
  34. tutorials/slim/prune/image_classification/mobilenetv2_train.py (+4 -4)
  35. tutorials/slim/prune/object_detection/yolov3_prune.py (+6 -6)
  36. tutorials/slim/prune/object_detection/yolov3_train.py (+4 -4)
  37. tutorials/slim/prune/semantic_segmentation/unet_prune.py (+6 -6)
  38. tutorials/slim/prune/semantic_segmentation/unet_train.py (+4 -4)
  39. tutorials/slim/quantize/instance_segmentation/mask_rcnn_qat.py (+1 -1)
  40. tutorials/slim/quantize/instance_segmentation/mask_rcnn_train.py (+1 -1)
  41. tutorials/slim/quantize/semantic_segmentation/unet_qat.py (+3 -3)
  42. tutorials/slim/quantize/semantic_segmentation/unet_train.py (+4 -4)
  43. tutorials/train/image_classification/alexnet.py (+4 -4)
  44. tutorials/train/image_classification/darknet53.py (+4 -4)
  45. tutorials/train/image_classification/densenet121.py (+4 -4)
  46. tutorials/train/image_classification/hrnet_w18_c.py (+4 -4)
  47. tutorials/train/image_classification/mobilenetv3_large_w_custom_optimizer.py (+2 -2)
  48. tutorials/train/image_classification/mobilenetv3_small.py (+4 -4)
  49. tutorials/train/image_classification/resnet50_vd_ssld.py (+4 -4)
  50. tutorials/train/image_classification/shufflenetv2.py (+4 -4)
  51. tutorials/train/image_classification/xception41.py (+4 -4)
  52. tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py (+4 -4)
  53. tutorials/train/object_detection/faster_rcnn_hrnet_w18.py (+4 -4)
  54. tutorials/train/object_detection/faster_rcnn_r50_fpn.py (+4 -4)
  55. tutorials/train/object_detection/ppyolo.py (+4 -4)
  56. tutorials/train/object_detection/ppyolotiny.py (+4 -4)
  57. tutorials/train/object_detection/ppyolov2.py (+4 -4)
  58. tutorials/train/object_detection/yolov3_darknet53.py (+4 -4)
  59. tutorials/train/semantic_segmentation/bisenetv2.py (+4 -4)
  60. tutorials/train/semantic_segmentation/deeplabv3p_resnet50_vd.py (+4 -4)
  61. tutorials/train/semantic_segmentation/fastscnn.py (+4 -4)
  62. tutorials/train/semantic_segmentation/hrnet.py (+4 -4)
  63. tutorials/train/semantic_segmentation/unet.py (+4 -4)

+ 7 - 7
deploy/cpp/docs/compile/openvino/openvino_windows.md

@@ -2,7 +2,7 @@
 
 This document walks users through running inference on PaddlePaddle models with OpenVINO and compiling the demo. Before the compilation steps below, install OpenVINO following the official guide: [OpenVINO-windows](https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_windows.html)
 
-**Note:** 
+**Note:**
 
 - The OpenVINO version we tested against is 2021.3; if you run into problems with another version, try switching to this one
 - Converting detection models to the OpenVINO format is currently broken; only segmentation and classification models are supported for now
@@ -81,14 +81,14 @@ git clone https://github.com/PaddlePaddle/PaddleX.git
 
 ### Step 4. Compile
 1. Open Visual Studio 2019 Community and click `Continue without code`
-   
+
    ![](../../images/vs2019_step1.png)
 
 2. Click `File`->`Open`->`CMake`
 
 ![](../../images/vs2019_step2.png)
 
-Select the path containing the C++ inference code (e.g. `D:\projects\PaddleX\dygraph\deploy\cpp`) and open `CMakeList.txt`:
+Select the path containing the C++ inference code (e.g. `D:\projects\PaddleX\deploy\cpp`) and open `CMakeList.txt`:
 ![](../../images/vs2019_step3.png)
 
 3. The project may start building automatically when opened. It will fail because the dependency paths below have not been configured yet; this error can be ignored for now.
@@ -99,14 +99,14 @@ git clone https://github.com/PaddlePaddle/PaddleX.git
 4. Click `Browse` to set the build options for the `gflag`, `OpenCV` and `OpenVINO` paths (you can also click "Edit JSON" in the upper-right corner, modify the json file directly, save it, then click Project->Generate Cache)
 
    ![](../../images/vs2019_step5.png)
-   
+
   The dependency paths are explained below; note that an OpenVINO build only requires checking and filling in the following parameters:
 
 | Parameter     | Meaning                                                                                                                                                |
 | ---------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
 | WITH_OPENVINO  | Whether to use the OpenVINO inference engine; defaults to False. Checking the box (or setting it to True in the json file) builds the OpenVINO inference engine  |
 | OPENCV_DIR | OpenCV installation path, e.g. `D:\\projects\\opencv`   |
-| GFLAGS_DIR | gflag path, e.g. `D:\\projects\\PaddleX\\dygraph\\deploy\\cpp\\deps\\gflags` |
+| GFLAGS_DIR | gflag path, e.g. `D:\\projects\\PaddleX\\deploy\\cpp\\deps\\gflags` |
 | OPENVINO_DIR | OpenVINO path, e.g. `C:\\Program Files (x86)\\Intel\\openvino_2021\\inference_engine` |
 | NGRAPH_LIB | OpenVINO's ngraph path, e.g. `C:\\Program Files (x86)\\Intel\\openvino_2021\\deployment_tools\\ngraph` |
 
@@ -121,13 +121,13 @@ git clone https://github.com/PaddlePaddle/PaddleX.git
 
 #### Build fails because the build environment cannot access the network?
 
-- If there is no network access, manually download [yaml-cpp.zip](https://bj.bcebos.com/paddlex/deploy/deps/yaml-cpp.zip) (no need to unzip), then edit `PaddleX\dygraph\deploy\cpp\cmake\yaml.cmake`, replacing the address in `URL https://bj.bcebos.com/paddlex/deploy/deps/yaml-cpp.zip` with the path downloaded in step 3, e.g. `URL D:\projects\yaml-cpp.zip`.
+- If there is no network access, manually download [yaml-cpp.zip](https://bj.bcebos.com/paddlex/deploy/deps/yaml-cpp.zip) (no need to unzip), then edit `PaddleX\deploy\cpp\cmake\yaml.cmake`, replacing the address in `URL https://bj.bcebos.com/paddlex/deploy/deps/yaml-cpp.zip` with the path downloaded in step 3, e.g. `URL D:\projects\yaml-cpp.zip`.
 - Be sure to check the WITH_OPENVINO option; the WITH_GPU and WITH_TENSORRT options can be unchecked
 - Debug builds are not supported; be sure to switch to Release
 
 ### Step5: Build output
 
-After compilation, a sample executable `model_infer` is generated under `PaddleX/dygraph/deploy/cpp/build/demo` for loading a model and running prediction. Taking the ResNet50 model converted above as an example, run:
+After compilation, a sample executable `model_infer` is generated under `PaddleX/deploy/cpp/build/demo` for loading a model and running prediction. Taking the ResNet50 model converted above as an example, run:
 
 ```
 ./model_infer.exe --xml_file openvino_model/resnet50/ResNet50_vd.xml --bin_file openvino_model/resnet50/ResNet50_vd.bin --cfg_file openvino_model/resnet50/resnet50_imagenet.yml --model_type clas --image test.jpeg

+ 8 - 8
deploy/cpp/docs/compile/paddle/linux.md

@@ -12,9 +12,9 @@ Ubuntu 16.04/18.04
 ### Step1: Get the deployment code
 ```
 git clone https://github.com/PaddlePaddle/PaddleX.git
-cd PaddleX/dygraph/deploy/cpp
+cd PaddleX/deploy/cpp
 ```
-**Note**: the `C++` inference code lives in the `PaddleX/dygraph/deploy/cpp` directory, which does not depend on any other directory under `PaddleX`. All shared implementation code is in the `model_deploy` directory, and all example code is in the `demo` directory.
+**Note**: the `C++` inference code lives in the `PaddleX/deploy/cpp` directory, which does not depend on any other directory under `PaddleX`. All shared implementation code is in the `model_deploy` directory, and all example code is in the `demo` directory.
 
 ### Step 2. Download the PaddlePaddle C++ inference library
 Prebuilt PaddlePaddle C++ inference libraries are provided for GPU or CPU, with or without TensorRT, and for different CUDA versions. PaddleX currently supports Paddle inference libraries 2.0+; download links for the latest 2.1 version are listed below:
@@ -28,7 +28,7 @@ Prebuilt PaddlePaddle C++ inference libraries are provided for GPU or CPU, with
 
 Choose the download that matches your setup; if none of the versions above meets your needs, pick a suitable one from the [C++ inference library download list](https://paddleinference.paddlepaddle.org.cn/v2.1/user_guides/download_lib.html).
 
-After unzipping the inference library, its directory (e.g. unzipped to `PaddleX/dygraph/deploy/cpp/paddle_inference/`) mainly contains:
+After unzipping the inference library, its directory (e.g. unzipped to `PaddleX/deploy/cpp/paddle_inference/`) mainly contains:
 
 ```
 ├── paddle/ # paddle core libraries and header files
@@ -39,7 +39,7 @@ Prebuilt PaddlePaddle C++ inference libraries are provided for GPU or CPU, with
 ```
 
 ### Step 3. Modify the build parameters
-Edit the parameters in the `PaddleX/dygraph/deploy/cpp/script/build.sh` script according to your system environment; the main ones to modify are the following
+Edit the parameters in the `PaddleX/deploy/cpp/script/build.sh` script according to your system environment; the main ones to modify are the following
 | Parameter     | Description                                                                            |
 | :------------ | :----------------------------------------------------------------------------------- |
 | WITH_GPU      | ON or OFF, whether to use GPU; set to OFF if you downloaded a CPU inference library   |
@@ -52,7 +52,7 @@ Prebuilt PaddlePaddle C++ inference libraries are provided for GPU or CPU, with
 | OPENSSL_DIR    | Path to OPENSSL, required for decryption. Defaults to the `PaddleX/deploy/cpp/deps/openssl-1.1.0k` directory        |
 
 ### Step 4. Compile
-After editing build.sh, run the build. **[Note]**: run the following command from the `PaddleX/dygraph/deploy/cpp` directory
+After editing build.sh, run the build. **[Note]**: run the following command from the `PaddleX/deploy/cpp` directory
 
 ```
 sh script/build.sh
@@ -62,17 +62,17 @@ sh script/build.sh
 > The build calls script/bootstrap.sh to download the opencv, openssl and yaml dependencies; if there is no network access, download them manually as follows
 >
 > 1. Depending on your system version, download the matching opencv dependency: [Ubuntu 16.04](https://bj.bcebos.com/paddleseg/deploy/opencv3.4.6gcc4.8ffmpeg.tar.gz2)/[Ubuntu 18.04](https://bj.bcebos.com/paddlex/deploy/opencv3.4.6gcc4.8ffmpeg_ubuntu_18.04.tar.gz2)
-> 2. Unzip the opencv dependency (the unzipped directory is named opencv3.4.6gcc4.8ffmpeg), create the directory `PaddleX/dygraph/deploy/cpp/deps`, and copy the unzipped directory into it
+> 2. Unzip the opencv dependency (the unzipped directory is named opencv3.4.6gcc4.8ffmpeg), create the directory `PaddleX/deploy/cpp/deps`, and copy the unzipped directory into it
 > 3. [Download the yaml dependency package](https://bj.bcebos.com/paddlex/deploy/deps/yaml-cpp.zip); no need to unzip it
 > 4. Edit the `PaddleX/deploy/cpp/cmake/yaml.cmake` file, replacing the address in `URL https://bj.bcebos.com/paddlex/deploy/deps/yaml-cpp.zip` with the path downloaded in step 3, e.g. `URL /Users/Download/yaml-cpp.zip`
-> 5. If **encryption is enabled**, [download openssl](https://bj.bcebos.com/paddlex/tools/openssl-1.1.0k.tar.gz) and copy the unzipped directory next to opencv, i.e. into the `PaddleX/dygraph/deploy/cpp/deps` directory.
+> 5. If **encryption is enabled**, [download openssl](https://bj.bcebos.com/paddlex/tools/openssl-1.1.0k.tar.gz) and copy the unzipped directory next to opencv, i.e. into the `PaddleX/deploy/cpp/deps` directory.
 > 6. Re-run `sh script/build.sh` to compile
 
 
 
 ### Step 5. Build output
 
-After compilation, several sample executables such as `model_infer`, `multi_gpu_model_infer` and `batch_infer` are generated under `PaddleX/dygraph/deploy/cpp/build/demo`, for loading a model for prediction on a single GPU, on multiple GPUs, and with multiple batches respectively; see the following documents for usage:
+After compilation, several sample executables such as `model_infer`, `multi_gpu_model_infer` and `batch_infer` are generated under `PaddleX/deploy/cpp/build/demo`, for loading a model for prediction on a single GPU, on multiple GPUs, and with multiple batches respectively; see the following documents for usage:
 
 - [Single-GPU model inference example](../../demo/model_infer.md)
 - [Multi-GPU model inference example](../../demo/multi_gpu_model_infer.md)

+ 1 - 1
deploy/cpp/docs/compile/paddle/windows.md

@@ -104,7 +104,7 @@ Prebuilt PaddlePaddle C++ inference libraries are provided for GPU or CPU, with
 
 ### Step5: Build output
 
-After compilation, two sample executables, `model_infer` and `multi_gpu_model_infer`, are generated under `PaddleX/dygraph/deploy/cpp/build/demo`, for loading a model for prediction on a single GPU and on multiple GPUs respectively; see the following documents for usage
+After compilation, two sample executables, `model_infer` and `multi_gpu_model_infer`, are generated under `PaddleX/deploy/cpp/build/demo`, for loading a model for prediction on a single GPU and on multiple GPUs respectively; see the following documents for usage
 
 - [Single-GPU model inference example](../../demo/model_infer.md)
 - [Multi-GPU model inference example](../../demo/multi_gpu_model_infer.md)

+ 1 - 1
deploy/cpp/docs/demo/decrypt_infer.md

@@ -15,7 +15,7 @@
 - [Export a model from PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.0/deploy/EXPORT_MODEL.md)
 - [Export a model from PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/blob/release/v2.0/docs/model_export.md)
 - [Export a model from PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/zh_CN/tutorials/getting_started.md#4-%E4%BD%BF%E7%94%A8inference%E6%A8%A1%E5%9E%8B%E8%BF%9B%E8%A1%8C%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86)
-- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/export_model.md)
+- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md)
 
 
 You can also directly download the YOLOv3 model exported from PaddleDetection for this tutorial: [download link](https://bj.bcebos.com/paddlex/deploy2/models/yolov3_mbv1.tar.gz).

+ 1 - 1
deploy/cpp/docs/demo/model_infer.md

@@ -12,7 +12,7 @@
 - [Export a model from PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.0/deploy/EXPORT_MODEL.md)
 - [Export a model from PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/blob/release/v2.0/docs/model_export.md)
 - [Export a model from PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/zh_CN/tutorials/getting_started.md#4-%E4%BD%BF%E7%94%A8inference%E6%A8%A1%E5%9E%8B%E8%BF%9B%E8%A1%8C%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86)
-- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/export_model.md)
+- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md)
 
 
 You can also directly download the YOLOv3 model exported from PaddleDetection for this tutorial: [download link](https://bj.bcebos.com/paddlex/deploy2/models/yolov3_mbv1.tar.gz).

+ 2 - 2
deploy/cpp/docs/demo/multi_gpu_model_infer.md

@@ -1,6 +1,6 @@
 # Multi-GPU model inference example
 
-This document explains how to use the compiled `PaddleX/dygraph/deploy/cpp/demo/multi_gpu_model_infer.cpp`. It is provided for reference only; developers can build on this demo to meet their integration needs.
+This document explains how to use the compiled `PaddleX/deploy/cpp/demo/multi_gpu_model_infer.cpp`. It is provided for reference only; developers can build on this demo to meet their integration needs.
 
 The multi-GPU mechanism works as follows
 
@@ -22,7 +22,7 @@
 - [Export a model from PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.0/deploy/EXPORT_MODEL.md)
 - [Export a model from PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/blob/release/v2.0/docs/model_export.md)
 - [Export a model from PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/zh_CN/tutorials/getting_started.md#4-%E4%BD%BF%E7%94%A8inference%E6%A8%A1%E5%9E%8B%E8%BF%9B%E8%A1%8C%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86)
-- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/export_model.md)
+- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md)
 
 
 

+ 2 - 2
deploy/cpp/docs/demo/tensorrt_infer.md

@@ -1,6 +1,6 @@
 # Model inference with TensorRT
 
-Based on the `PaddleX/dygraph/deploy/cpp/demo/tensorrt_infer.cpp` example, this document explains how to deploy models with the PaddleInference engine combined with TensorRT. Developers can build on this demo to meet their integration needs.
+Based on the `PaddleX/deploy/cpp/demo/tensorrt_infer.cpp` example, this document explains how to deploy models with the PaddleInference engine combined with TensorRT. Developers can build on this demo to meet their integration needs.
 
 ## Step 1: Compile
 Refer to the compilation documents
@@ -14,7 +14,7 @@
 - [Export a model from PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.0/deploy/EXPORT_MODEL.md)
 - [Export a model from PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/blob/release/v2.0/docs/model_export.md)
 - [Export a model from PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/zh_CN/tutorials/getting_started.md#4-%E4%BD%BF%E7%94%A8inference%E6%A8%A1%E5%9E%8B%E8%BF%9B%E8%A1%8C%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86)
-- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/export_model.md)
+- [Export a model from PaddleX](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md)
 
 You can also directly download the YOLOv3 model exported from PaddleDetection for this tutorial: [download link](https://bj.bcebos.com/paddlex/deploy2/models/yolov3_mbv1.tar.gz).
 

+ 3 - 3
deploy/cpp/docs/manufacture_sdk/README.md

@@ -1,6 +1,6 @@
 # Industrial-grade prebuilt deployment SDK for multiple devices and platforms
 
-PaddleX-Deploy has been fully upgraded: it provides unified deployment for the PaddlePaddle vision suites PaddleX, PaddleDetection, PaddleClas and PaddleSeg, with end-to-end support for high-performance inference engines such as PaddleInference, PaddleLite, OpenVINO and Triton. To **build and use it from source**, see [PaddlePaddle model C++ deployment](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp).
+PaddleX-Deploy has been fully upgraded: it provides unified deployment for the PaddlePaddle vision suites PaddleX, PaddleDetection, PaddleClas and PaddleSeg, with end-to-end support for high-performance inference engines such as PaddleInference, PaddleLite, OpenVINO and Triton. To **build and use it from source**, see [PaddlePaddle model C++ deployment](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp).
 
 In industrial deployment, environment issues often make compiling the deployment code costly in time and manpower. And when the business logic of the production line gets more complex, especially when several models are chained, preprocessing and intermediate-result handling must be inserted around model inference, so developing the corresponding deployment code becomes a large engineering effort.
 
@@ -17,7 +17,7 @@ PaddleX-Deploy has been fully upgraded: it provides unified deployment for the
 
 ## <h2 id="1">1 Manufacture SDK overview</h2>
 
-Building on the end-to-end high-performance deployment capability of [PaddleX-Deploy](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp), PaddleX Manufacture abstracts the business logic around a deep learning model into a Pipeline. Data preprocessing before the model, model prediction, and intermediate-result handling between chained models each map to a PipelineNode in the Pipeline; users only need to arrange the nodes' relationships in a Pipeline configuration file to feed data into the Pipeline and quickly obtain inference results. The architecture of the Manufacture SDK is shown below:
+Building on the end-to-end high-performance deployment capability of [PaddleX-Deploy](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp), PaddleX Manufacture abstracts the business logic around a deep learning model into a Pipeline. Data preprocessing before the model, model prediction, and intermediate-result handling between chained models each map to a PipelineNode in the Pipeline; users only need to arrange the nodes' relationships in a Pipeline configuration file to feed data into the Pipeline and quickly obtain inference results. The architecture of the Manufacture SDK is shown below:
 
 <div align="center">
 <img src="images/pipeline_arch.png"  width = "500" />              </div>
@@ -112,7 +112,7 @@ version: 1.0.0
 
 ## <h2 id="5">Deploying with a Pipeline</h2>
 
-Before deploying, make sure the deployment model has been exported. If it has not, follow [Export the deployment model](https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/export_model.md) to export it first.
+Before deploying, make sure the deployment model has been exported. If it has not, follow [Export the deployment model](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md) to export it first.
 
 A demo directory is included in the SDK download package; the following uses this demo to explain deployment with a Pipeline.
 

+ 1 - 1
deploy/cpp/docs/models/paddlex.md

@@ -5,7 +5,7 @@
 
 ## Step 1: Export the deployment model
 
-See the [PaddleX model export document](https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/export_model.md)
+See the [PaddleX model export document](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md)
 
 
 ## Step 2: Compile

+ 3 - 3
docs/CHANGELOG.md

@@ -4,9 +4,9 @@
 
 - **2021.07.06 v2.0.0-rc3**
 
-  * Fully upgraded PaddleX deployment, with end-to-end unified deployment for the PaddlePaddle vision suites PaddleDetection, PaddleClas, PaddleSeg and PaddleX. [Tutorial](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp)
-  * Newly released Manufacture SDK, a prebuilt PaddlePaddle deployment development kit (SDK) for industrial-grade, multi-device, multi-platform accelerated deployment; inference deployment can be completed quickly in a low-code fashion by configuring a business-logic pipeline file. [Tutorial](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp/docs/manufacture_sdk)
-  * Released industrial practice cases: rebar counting, defect detection, robotic grasping, industrial meter reading, and C# deployment on Windows. [Tutorial](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/examples)
+  * Fully upgraded PaddleX deployment, with end-to-end unified deployment for the PaddlePaddle vision suites PaddleDetection, PaddleClas, PaddleSeg and PaddleX. [Tutorial](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp)
+  * Newly released Manufacture SDK, a prebuilt PaddlePaddle deployment development kit (SDK) for industrial-grade, multi-device, multi-platform accelerated deployment; inference deployment can be completed quickly in a low-code fashion by configuring a business-logic pipeline file. [Tutorial](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/docs/manufacture_sdk)
+  * Released industrial practice cases: rebar counting, defect detection, robotic grasping, industrial meter reading, and C# deployment on Windows. [Tutorial](https://github.com/PaddlePaddle/PaddleX/tree/develop/examples)
   * Upgraded PaddleX GUI, with support for 30-series GPUs and new models PP-YOLO V2, PP-YOLO Tiny and BiSeNetV2. [Tutorial](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/install.md#2-padldex-gui%E5%BC%80%E5%8F%91%E6%A8%A1%E5%BC%8F%E5%AE%89%E8%A3%85)
 
 

+ 1 - 1
docs/apis/export_model.md

@@ -2,7 +2,7 @@
 
 **Note: whenever model deployment is involved, refer to this document to export the deployment model**  
 
-When deploying a model on the server side, the model saved during training must be exported to the inference format. The exported inference-format model consists of five files, `model.pdmodel`, `model.pdiparams`, `model.pdiparams.info`, `model.yml` and `pipeline.yml`, which are the model's network structure, the model weights, the model weight names, the model configuration file (including data preprocessing parameters), and a pipeline configuration file usable with the [PaddleX Manufacture SDK](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp/docs/manufacture_sdk).
+When deploying a model on the server side, the model saved during training must be exported to the inference format. The exported inference-format model consists of five files, `model.pdmodel`, `model.pdiparams`, `model.pdiparams.info`, `model.yml` and `pipeline.yml`, which are the model's network structure, the model weights, the model weight names, the model configuration file (including data preprocessing parameters), and a pipeline configuration file usable with the [PaddleX Manufacture SDK](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/docs/manufacture_sdk).
 
 > **Check your model folder**: if it contains the three files `model.pdparams`, `model.pdopt` and `model.yml`, you need to export the model following the procedure below
 

BIN
docs/apis/images/detection_analysis.jpg


BIN
docs/apis/images/insect_bbox-allclass-allarea.png


BIN
docs/apis/images/insect_bbox_pr_curve(iou-0.5).png


+ 2 - 2
docs/apis/prediction.md

@@ -39,7 +39,7 @@ result = model.predict(test_jpg)
 pdx.det.visualize(test_jpg, result, threshold=0.3, save_dir='./')
 ```
 - YOLOv3 model predict API [reference](./apis/models/detection.md#predict)
-- pdx.det.visualize visualization API [reference](https://github.com/PaddlePaddle/PaddleX/blob/d555d26f92cd6f8d3b940636bd7cb9043de93768/dygraph/paddlex/cv/models/utils/visualize.py#L25)
+- pdx.det.visualize visualization API [reference](https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/models/utils/visualize.py#L25)
 > Note: for object detection and instance segmentation models, the results returned by the `predict` API need to be filtered for low-confidence detections by the user. The `paddlex.det.visualize` API provides a `threshold` for this: results whose confidence is below it are filtered out and not visualized.
 ![](./images/yolo_predict.jpg)
 
@@ -78,5 +78,5 @@ pdx.seg.visualize(test_jpg, result, weight=0.0, save_dir='./')
 In the example code above, calling `paddlex.seg.visualize` visualizes the semantic segmentation prediction; the visualization is saved under `save_dir`, as shown below. The `weight` parameter adjusts the blending weight between the prediction and the original image: 0.0 shows only the visualized prediction mask, 1.0 shows only the original image.
 
 - DeepLabv3 model predict API [reference](./apis/models/semantic_segmentation.md#predict)
-- pdx.seg.visualize visualization API [reference](https://github.com/PaddlePaddle/PaddleX/blob/d555d26f92cd6f8d3b940636bd7cb9043de93768/dygraph/paddlex/cv/models/utils/visualize.py#L50)
+- pdx.seg.visualize visualization API [reference](https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/models/utils/visualize.py#L50)
 ![](images/deeplab_predict.jpg)

+ 120 - 8
docs/apis/visualize.md

@@ -3,9 +3,11 @@
 ## Table of contents
 
 * [paddlex.det.visualize](#1)
-* [paddlex.seg.visualize](#2)
-* [paddlex.visualize_det](#3)
-* [paddlex.visualize_seg](#4)
+* [paddlex.det.draw_pr_curve](#2)
+* [paddlex.det.coco_error_analysis](#3)
+* [paddlex.seg.visualize](#4)
+* [paddlex.visualize_det](#5)
+* [paddlex.visualize_seg](#6)
 
 
 ## <h2 id="1">paddlex.det.visualize</h2>
@@ -27,17 +29,127 @@ paddlex.det.visualize(image, result, threshold=0.5, save_dir='./', color=None)
 
 
 Usage example:
-```
+```python
 import paddlex as pdx
 model = pdx.load_model('xiaoduxiong_epoch_12')
 result = model.predict('./xiaoduxiong_epoch_12/xiaoduxiong.jpeg')
 pdx.det.visualize('./xiaoduxiong_epoch_12/xiaoduxiong.jpeg', result, save_dir='./')
 # The prediction is saved at ./visualize_xiaoduxiong.jpeg
+```
+
+
+## <h2 id="2">paddlex.det.draw_pr_curve</h2>
+> Precision-recall visualization for object detection / instance segmentation
+```python
+paddlex.det.draw_pr_curve(eval_details_file=None, gt=None, pred_bbox=None, pred_mask=None, iou_thresh=0.5, save_dir='./')
+```
+Visualize, for each category, the precision-recall relationship in the evaluation results of an object detection / instance segmentation model, along with the relationship between recall and the confidence threshold.
+> Note: each model directory saved by PaddleX during training contains an `eval_details.json` file; pass its path as the `eval_details_file` argument and set `iou_thresh` to obtain the model's PR curves on the validation set.
+
+### Parameters
+> * **eval_details_file** (str): path to the saved model evaluation results, containing ground-truth information and predictions. Defaults to None.
+> * **gt** (list): ground-truth information of the dataset. Defaults to None.
+> * **pred_bbox** (list): the model's predicted boxes on the dataset. Defaults to None.
+> * **pred_mask** (list): the model's predicted masks on the dataset. Defaults to None.
+> * **iou_thresh** (float): IoU threshold for counting a predicted box or mask as a true positive. Defaults to 0.5.
+> * **save_dir** (str): directory where the visualization is saved. Defaults to './'.
+
+**Note:** `eval_details_file` takes precedence: whenever it is not None, ground truth and predictions are extracted from `eval_details_file` for the analysis. Only when `eval_details_file` is None are `gt`, `pred_bbox` and `pred_mask` used.
+
+### Usage example
+Download the [model](https://bj.bcebos.com/paddlex/2.0/faster_rcnn_e12.tar.gz) and the [dataset](https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz) used in the examples below
+
+> Option 1: analyze the evaluation results file `eval_details.json` inside a model folder saved during training, e.g. the `eval_details.json` in this [model](https://bj.bcebos.com/paddlex/models/insect_epoch_270.zip).
+```python
+import paddlex as pdx
+eval_details_file = 'faster_rcnn_e12/eval_details.json'
+pdx.det.draw_pr_curve(eval_details_file, save_dir='./insect')
+```
+> Option 2: analyze the evaluation results returned by the model evaluation function.
+
+```python
+import paddlex as pdx
 
+model = pdx.load_model('faster_rcnn_e12')
+eval_dataset = pdx.datasets.VOCDetection(
+    data_dir='insect_det',
+    file_list='insect_det/val_list.txt',
+    label_list='insect_det/labels.txt',
+    transforms=model.test_transforms)
+metrics, evaluate_details = model.evaluate(eval_dataset, batch_size=1, return_details=True)
+gt = evaluate_details['gt']
+bbox = evaluate_details['bbox']
+pdx.det.draw_pr_curve(gt=gt, pred_bbox=bbox, save_dir='./insect')
 ```
 
+The per-category precision-recall relationships of the predicted boxes, and the relationship between recall and the confidence threshold, are visualized as follows:
+![](./images/insect_bbox_pr_curve(iou-0.5).png)
 
-## <h2 id="2">paddlex.seg.visualize</h2>
+
+## <h2 id="3">paddlex.det.coco_error_analysis</h2>
+> Analyze the causes of a model's prediction errors
+
+```python
+paddlex.det.coco_error_analysis(eval_details_file=None, gt=None, pred_bbox=None, pred_mask=None, save_dir='./output')
+```
+Analyze, factor by factor, why the model's predictions are wrong, and present the analysis as charts. Example analysis charts:
+
+![](images/detection_analysis.jpg)
+
+The left chart shows the analysis for the `person` category; the right chart shows the overall analysis across all categories.
+
+The chart contains 7 Precision-Recall (PR) curves, each with a higher Average Precision (AP) than the one before it, because the evaluation requirement is progressively relaxed. Taking the `person` category as an example, the evaluation requirement behind each PR curve is:
+
+* C75: PR curve at IoU 0.75, with AP 0.510.
+* C50: PR curve at IoU 0.5, with AP 0.724. The white area between C50 and C75 represents the AP gain from relaxing IoU from 0.75 to 0.5.
+* Loc: PR curve at IoU 0.1, with AP 0.832. The blue area between Loc and C50 represents the AP gain from relaxing IoU from 0.5 to 0.1; the larger it is, the more detected boxes are poorly localized.
+* Sim: on top of Loc, a detected box whose category differs from the ground truth but belongs to the same supercategory is no longer counted as wrong; the PR curve under this requirement has AP 0.832. The larger the red area between Sim and Loc, the more confusion there is among classes within a supercategory.
+* Oth: on top of Sim, a detected box whose supercategory differs from the ground truth is no longer counted as wrong; the PR curve under this requirement has AP 0.841. The larger the green area between Oth and Sim, the more confusion there is across supercategories.
+* BG: on top of Oth, detections on background regions are no longer counted as wrong; the PR curve under this requirement has AP 0.911. The larger the purple area between BG and Oth, the more false detections there are on background regions.
+* FN: on top of BG, missed ground-truth boxes are no longer counted as wrong; the PR curve under this requirement has AP 1.00. The larger the orange area between FN and BG, the more ground-truth boxes are missed.
+
+For more details, see the [analysis tool description on the COCO dataset website](https://cocodataset.org/#detection-eval)
+
+### Parameters
+> * **eval_details_file** (str): path to the saved model evaluation results, containing ground-truth information and predictions. Defaults to None.
+> * **gt** (list): ground-truth information of the dataset. Defaults to None.
+> * **pred_bbox** (list): the model's predicted boxes on the dataset. Defaults to None.
+> * **pred_mask** (list): the model's predicted masks on the dataset. Defaults to None.
+> * **save_dir** (str): directory where the visualization is saved. Defaults to './output'.
+
+**Note:** `eval_details_file` takes precedence: whenever it is not None, ground truth and predictions are extracted from `eval_details_file` for the analysis. Only when `eval_details_file` is None are `gt`, `pred_bbox` and `pred_mask` used.
+
+### Usage example
+Download the [model](https://bj.bcebos.com/paddlex/models/insect_epoch_270.zip) and the [dataset](https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz) used in the examples below
+
+> Option 1: analyze the evaluation results file `eval_details.json` inside a model folder saved during training, e.g. the `eval_details.json` in this [model](https://bj.bcebos.com/paddlex/models/insect_epoch_270.zip).
+```python
+import paddlex as pdx
+eval_details_file = 'insect_epoch_270/eval_details.json'
+pdx.det.coco_error_analysis(eval_details_file, save_dir='./insect')
+```
+> Option 2: analyze the evaluation results returned by the model evaluation function.
+
+```python
+import paddlex as pdx
+
+model = pdx.load_model('insect_epoch_270')
+eval_dataset = pdx.datasets.VOCDetection(
+    data_dir='insect_det',
+    file_list='insect_det/val_list.txt',
+    label_list='insect_det/labels.txt',
+    transforms=model.test_transforms)
+metrics, evaluate_details = model.evaluate(eval_dataset, batch_size=8, return_details=True)
+gt = evaluate_details['gt']
+bbox = evaluate_details['bbox']
+pdx.det.coco_error_analysis(gt=gt, pred_bbox=bbox, save_dir='./insect')
+```
+An example of the overall analysis across all categories:
+
+![](./images/insect_bbox-allclass-allarea.png)
+
+
+## <h2 id="4">paddlex.seg.visualize</h2>
 
 ```python
 paddlex.seg.visualize(image, result, weight=0.6, save_dir='./', color=None)
@@ -56,7 +168,7 @@ paddlex.seg.visualize(image, result, weight=0.6, save_dir='./', color=None)
 
 Usage example:
 
-```
+```python
 import paddlex as pdx
 model = pdx.load_model('cityscape_deeplab')
 result = model.predict('city.png')
@@ -65,10 +177,10 @@ pdx.seg.visualize('city.png', result, save_dir='./')
 ```
 
 
-## <h2 id="3">paddlex.visualize_det</h2>
+## <h2 id="5">paddlex.visualize_det</h2>
 
 > Alias of paddlex.det.visualize; the API is the same as [paddlex.det.visualize](./visualize.md#paddlex.det.visualize)
 
-## <h2 id="4">paddlex.visualize_seg</h2>
+## <h2 id="6">paddlex.visualize_seg</h2>
 
 > Alias of paddlex.seg.visualize; the API is the same as [paddlex.seg.visualize](./visualize.md#paddlex.seg.visualize)

+ 1 - 1
docs/install.md

@@ -58,7 +58,7 @@ The github code is continuously updated with development; you can install the
 
 ```
 git clone https://github.com/PaddlePaddle/PaddleX.git
-cd PaddleX/dygraph
+cd PaddleX
 pip install -r requirements.txt
 python setup.py install
 ```

+ 4 - 4
examples/C#_deploy/Program.cs

@@ -19,11 +19,11 @@ namespace ConsoleApp2
 
         static void Main(string[] args)
         {
-            string imgfile = "E:\\PaddleX_deploy\\PaddleX\\dygraph\\deploy\\cpp\\out\\paddle_deploy\\1.png";
+            string imgfile = "E:\\PaddleX_deploy\\PaddleX\\deploy\\cpp\\out\\paddle_deploy\\1.png";
             string model_type = "det";
-            string model_filename = "E:\\PaddleX_deploy\\PaddleX\\dygraph\\deploy\\cpp\\out\\paddle_deploy\\yolov3_darknet53_270e_coco1\\model.pdmodel";
-            string params_filename = "E:\\PaddleX_deploy\\PaddleX\\dygraph\\deploy\\cpp\\out\\paddle_deploy\\yolov3_darknet53_270e_coco1\\model.pdiparams";
-            string cfg_file = "E:\\PaddleX_deploy\\PaddleX\\dygraph\\deploy\\cpp\\out\\paddle_deploy\\yolov3_darknet53_270e_coco1\\infer_cfg.yml";
+            string model_filename = "E:\\PaddleX_deploy\\PaddleX\\deploy\\cpp\\out\\paddle_deploy\\yolov3_darknet53_270e_coco1\\model.pdmodel";
+            string params_filename = "E:\\PaddleX_deploy\\PaddleX\\deploy\\cpp\\out\\paddle_deploy\\yolov3_darknet53_270e_coco1\\model.pdiparams";
+            string cfg_file = "E:\\PaddleX_deploy\\PaddleX\\deploy\\cpp\\out\\paddle_deploy\\yolov3_darknet53_270e_coco1\\infer_cfg.yml";
 
 
             InitModel(model_type, model_filename, params_filename, cfg_file);

+ 3 - 3
examples/C#_deploy/README.md

@@ -39,11 +39,11 @@
 
 ```
 git clone https://github.com/PaddlePaddle/PaddleX
-cd dygraph
+cd PaddleX
 ```
 
 
-Compile with CMake. We mainly compile the code in `PaddleX/dygraph/deploy/cpp` and create an `out` folder to hold the build output,
+Compile with CMake. We mainly compile the code in `PaddleX/deploy/cpp` and create an `out` folder to hold the build output,
 
 <div align="center">
 <img src="./images/2.png"  width = "800" />              </div>
@@ -87,7 +87,7 @@ cd dygraph
 
 ### 3.2 Modify model_infer.cpp and regenerate the dll
 
-* A modified model_infer.cpp is provided at [model_infer.cpp](./model_infer.cpp). Replace the model_infer.cpp in [deploy/cpp/demo](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp/demo) with [model_infer.cpp](./model_infer.cpp), or modify your own model_infer.cpp with reference to it.
+* A modified model_infer.cpp is provided at [model_infer.cpp](./model_infer.cpp). Replace the model_infer.cpp in [deploy/cpp/demo](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/demo) with [model_infer.cpp](./model_infer.cpp), or modify your own model_infer.cpp with reference to it.
 
 ### 3.3 Create a C# project and call the dll
 

+ 1 - 1
examples/defect_detection/README.md

@@ -70,7 +70,7 @@ eval_transforms = T.Compose([
 
 ```bash
 # Define the datasets used for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/datasets/coco.py#L26
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='dataset/JPEGImages',
     ann_file='dataset/train.json',

+ 1 - 1
examples/defect_detection/code/train.py

@@ -18,7 +18,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets used for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/datasets/coco.py#L26
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='dataset/JPEGImages',
     ann_file='dataset/train.json',

+ 8 - 8
examples/meter_reader/README.md

@@ -343,7 +343,7 @@ paddlex --export_inference --model_dir=output/ppyolov2_r50vd_dcn/best_model --sa
 paddlex --export_inference --model_dir=output/deeplabv3p_r50vd/best_model --save_dir=meter_seg_model
 ```
 
-If TensorRT is to be used for deployment, the model's input size must be fixed when exporting; see [Export the deployment model](https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/export_model.md) for the export procedure.
+If TensorRT is to be used for deployment, the model's input size must be fixed when exporting; see [Export the deployment model](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md) for the export procedure.
 
 ## <h2 id="8">8 Model deployment on Windows</h2>
 
@@ -354,7 +354,7 @@ paddlex --export_inference --model_dir=output/deeplabv3p_r50vd/best_model --save
 | [meter_det_model](https://bj.bcebos.com/paddlex/examples2/meter_reader/meter_det_model.tar.gz) | [meter_seg_model](https://bj.bcebos.com/paddlex/examples2/meter_reader//meter_seg_model.tar.gz) |
 
 
-Here we deploy with the [PaddleX Manufacture SDK](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp/docs/manufacture_sdk).
+Here we deploy with the [PaddleX Manufacture SDK](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/docs/manufacture_sdk).
 
 ### Environment dependencies
 
@@ -417,7 +417,7 @@ git clone https://github.com/PaddlePaddle/PaddleX.git
 
 ![](./images/step5_2-1.png)
 
-Select the path containing the meter reader C++ inference code (e.g. `D:\projects\PaddleX\dygraph/examples/meter_reader/deploy/cpp/meter_reader`) and open `CMakeList.txt`:
+Select the path containing the meter reader C++ inference code (e.g. `D:\projects\PaddleX\examples/meter_reader/deploy/cpp/meter_reader`) and open `CMakeList.txt`:
 ![](./images/step5_2-2.png)
 
 3. The project may start building automatically when opened. It will fail because the dependency paths below have not been configured yet; this error can be ignored for now.
@@ -449,9 +449,9 @@ git clone https://github.com/PaddlePaddle/PaddleX.git
 
 ### Step6: Build output
 
-After compilation, a `meter_reader.exe` binary is generated under `D:\projects\PaddleX\dygraph\examples\meter_reader\out\build\x64-Release`.
+After compilation, a `meter_reader.exe` binary is generated under `D:\projects\PaddleX\examples\meter_reader\out\build\x64-Release`.
 
-The pipeline configuration file required by PaddleXManufacture is at `PaddleX\dygraph\examples\meter_reader\meter_pipeline.yml`; open it and set the paths of the detection and segmentation models:
+The pipeline configuration file required by PaddleXManufacture is at `PaddleX\examples\meter_reader\meter_pipeline.yml`; open it and set the paths of the detection and segmentation models:
 
 | Set the detection model path and set `use_gpu` and `use_trt` to true | Set the segmentation model path and set `use_gpu` and `use_trt` to true |
 | -- | -- |
@@ -461,7 +461,7 @@ git clone https://github.com/PaddlePaddle/PaddleX.git
 
 Open a CMD terminal and run the meter reader executable to run inference:
 ```
-cd PaddleX\dygraph\examples\meter_reader\deploy\cpp\meter_reader\
+cd PaddleX\examples\meter_reader\deploy\cpp\meter_reader\
 .\out\build\x64-Release\meter_reader.exe --pipeline_cfg meter_pipeline.yml --image 20190822_168.jpg
 ```
 After execution, the terminal prints the prediction results:
@@ -476,10 +476,10 @@ Meter 1: 1.05576932
 Meter 2: 6.21739101
 ```
 
-The visualized predictions of the detection model are saved under `PaddleX\dygraph\examples\meter_reader\deploy\cpp\meter_reader\out\build\x64-Release\output_det` and can be viewed:
+The visualized predictions of the detection model are saved under `PaddleX\examples\meter_reader\deploy\cpp\meter_reader\out\build\x64-Release\output_det` and can be viewed:
 ![](./images/20190822_168.jpg)
 
-The visualized predictions of the segmentation model are saved under `PaddleX\dygraph\examples\meter_reader\deploy\cpp\meter_reader\out\build\x64-Release\output_seg` and can be viewed:
+The visualized predictions of the segmentation model are saved under `PaddleX\examples\meter_reader\deploy\cpp\meter_reader\out\build\x64-Release\output_seg` and can be viewed:
 | Visualized segmentation of meter 1 | Visualized segmentation of meter 2 |
 | -- | -- |
 | ![](./images/20190822_168_06-30-17-09-33-217.jpg) | ![](20190822_168_06-30-17-09-33-213.jpg) |

+ 2 - 2
examples/meter_reader/deploy/cpp/meter_reader/meter_pipeline.yml

@@ -12,7 +12,7 @@ pipeline_nodes:
 - modelpredict0:
     type: Predict
     init_params:
-      model_dir: /paddle/PaddleX/dygraph/examples/meter_reader/det_inference/inference_model
+      model_dir: /paddle/PaddleX/examples/meter_reader/det_inference/inference_model
       gpu_id: 0
       use_gpu: true
       use_trt: false
@@ -41,7 +41,7 @@ pipeline_nodes:
 - modelpredict1:
     type: Predict
     init_params:
-      model_dir: /paddle/PaddleX/dygraph/examples/meter_reader/seg_inference/inference_model
+      model_dir: /paddle/PaddleX/examples/meter_reader/seg_inference/inference_model
       gpu_id: 0
       use_gpu: true
       use_trt: false

+ 1 - 1
examples/meter_reader/train_detection.py

@@ -23,7 +23,7 @@ meter_det_dataset = 'https://bj.bcebos.com/paddlex/examples/meter_reader/dataset
 pdx.utils.download_and_decompress(meter_det_dataset, path='./')
 
 # Define the datasets used for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/datasets/coco.py#L26
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='meter_det/train/',
     ann_file='meter_det/annotations/instance_train.json',

+ 1 - 1
examples/robot_grab/README.md

@@ -94,7 +94,7 @@ eval_transforms = T.Compose([
 
 ```bash
 # Define the datasets used for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/datasets/coco.py#L26
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='dataset/JPEGImages',
     ann_file='dataset/train.json',

+ 1 - 1
examples/robot_grab/code/train.py

@@ -18,7 +18,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets used for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/datasets/coco.py#L26
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='dataset/JPEGImages',
     ann_file='dataset/train.json',

+ 56 - 32
paddlex/cv/models/detector.py

@@ -511,13 +511,8 @@ class BaseDetector(BaseModel):
         batch_transforms = self._compose_batch_transform(transforms, 'test')
         batch_samples = batch_transforms(batch_samples)
         if to_tensor:
-            if isinstance(batch_samples, dict):
-                for k in batch_samples:
-                    batch_samples[k] = paddle.to_tensor(batch_samples[k])
-            else:
-                for sample in batch_samples:
-                    for k in sample:
-                        sample[k] = paddle.to_tensor(sample[k])
+            for k in batch_samples:
+                batch_samples[k] = paddle.to_tensor(batch_samples[k])
 
         return batch_samples
 
@@ -987,18 +982,6 @@ class FasterRCNN(BaseDetector):
         super(FasterRCNN, self).__init__(
             model_name='FasterRCNN', num_classes=num_classes, **params)
 
-    def run(self, net, inputs, mode):
-        if mode in ['train', 'eval']:
-            outputs = net(inputs)
-        else:
-            outputs = []
-            for sample in inputs:
-                net_out = net(sample)
-                for key in net_out:
-                    net_out[key] = net_out[key].numpy()
-                outputs.append(net_out)
-        return outputs
-
     def _compose_batch_transform(self, transforms, mode='train'):
         if mode == 'train':
             default_batch_transforms = [
@@ -1022,8 +1005,7 @@ class FasterRCNN(BaseDetector):
 
         batch_transforms = BatchCompose(
             custom_batch_transforms + default_batch_transforms,
-            collate_batch=collate_batch,
-            return_list=mode == 'test')
+            collate_batch=collate_batch)
 
         return batch_transforms
 
@@ -1069,13 +1051,6 @@ class FasterRCNN(BaseDetector):
         self.fixed_input_shape = image_shape
         return self._define_input_spec(image_shape)
 
-    def _postprocess(self, batch_pred):
-        prediction = [
-            super(FasterRCNN, self)._postprocess(pred)[0]
-            for pred in batch_pred
-        ]
-        return prediction
-
 
 class PPYOLO(YOLOv3):
     def __init__(self,
@@ -1245,6 +1220,31 @@ class PPYOLO(YOLOv3):
         self.downsample_ratios = downsample_ratios
         self.model_name = 'PPYOLO'
 
+    def _get_test_inputs(self, image_shape):
+        if image_shape is not None:
+            image_shape = self._check_image_shape(image_shape)
+            self._fix_transforms_shape(image_shape[-2:])
+        else:
+            image_shape = [None, 3, 608, 608]
+            if hasattr(self, 'test_transforms'):
+                if self.test_transforms is not None:
+                    for idx, op in enumerate(self.test_transforms.transforms):
+                        name = op.__class__.__name__
+                        if name == 'Resize':
+                            image_shape = [None, 3] + list(
+                                self.test_transforms.transforms[
+                                    idx].target_size)
+            logging.warning(
+                '[Important!!!] When exporting inference model for {},'.format(
+                    self.__class__.__name__) +
+                ' if fixed_input_shape is not set, it will be forcibly set to {}. '.
+                format(image_shape) +
+                'Please check image shape after transforms is {}, if not, fixed_input_shape '.
+                format(image_shape[1:]) + 'should be specified manually.')
+
+        self.fixed_input_shape = image_shape
+        return self._define_input_spec(image_shape)
+
 
 class PPYOLOTiny(YOLOv3):
     def __init__(self,
@@ -1353,6 +1353,31 @@ class PPYOLOTiny(YOLOv3):
         self.downsample_ratios = downsample_ratios
         self.model_name = 'PPYOLOTiny'
 
+    def _get_test_inputs(self, image_shape):
+        if image_shape is not None:
+            image_shape = self._check_image_shape(image_shape)
+            self._fix_transforms_shape(image_shape[-2:])
+        else:
+            image_shape = [None, 3, 320, 320]
+            if hasattr(self, 'test_transforms'):
+                if self.test_transforms is not None:
+                    for idx, op in enumerate(self.test_transforms.transforms):
+                        name = op.__class__.__name__
+                        if name == 'Resize':
+                            image_shape = [None, 3] + list(
+                                self.test_transforms.transforms[
+                                    idx].target_size)
+            logging.warning(
+                '[Important!!!] When exporting inference model for {},'.format(
+                    self.__class__.__name__) +
+                ' if fixed_input_shape is not set, it will be forcibly set to {}. '.
+                format(image_shape) +
+                'Please check image shape after transforms is {}, if not, fixed_input_shape '.
+                format(image_shape[1:]) + 'should be specified manually.')
+
+        self.fixed_input_shape = image_shape
+        return self._define_input_spec(image_shape)
+
 
 class PPYOLOv2(YOLOv3):
     def __init__(self,
@@ -1505,7 +1530,7 @@ class PPYOLOv2(YOLOv3):
         return self._define_input_spec(image_shape)
 
 
-class MaskRCNN(FasterRCNN):
+class MaskRCNN(BaseDetector):
     def __init__(self,
                  num_classes=80,
                  backbone='ResNet50_vd',
@@ -1740,7 +1765,7 @@ class MaskRCNN(FasterRCNN):
                 'mask_post_process': mask_post_process
             })
         self.with_fpn = with_fpn
-        super(FasterRCNN, self).__init__(
+        super(MaskRCNN, self).__init__(
             model_name='MaskRCNN', num_classes=num_classes, **params)
 
     def _compose_batch_transform(self, transforms, mode='train'):
@@ -1766,8 +1791,7 @@ class MaskRCNN(FasterRCNN):
 
         batch_transforms = BatchCompose(
             custom_batch_transforms + default_batch_transforms,
-            collate_batch=collate_batch,
-            return_list=mode == 'test')
+            collate_batch=collate_batch)
 
         return batch_transforms
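
The `_get_test_inputs` overrides added for PPYOLO and PPYOLOTiny pin the exported input spec and warn when a default shape is forced. A minimal sketch of avoiding that warning by fixing the input shape explicitly at export time, using the `paddlex --export_inference` CLI referenced by the export docs in this commit (the model and output directories here are placeholders):

```
paddlex --export_inference --model_dir=output/ppyolo/best_model --save_dir=./inference_model --fixed_input_shape=[608,608]
```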
 

+ 241 - 1
paddlex/cv/models/utils/det_metrics/coco_utils.py

@@ -18,6 +18,8 @@ from __future__ import print_function
 
 import sys
 import copy
+import os
+import os.path as osp
 import numpy as np
 import itertools
 from paddlex.ppdet.metrics.map_utils import draw_pr_curve
@@ -131,7 +133,7 @@ def cocoapi_eval(anns,
         results_flatten = list(itertools.chain(*results_per_category))
         headers = ['category', 'AP'] * (num_columns // 2)
         results_2d = itertools.zip_longest(
-            * [results_flatten[i::num_columns] for i in range(num_columns)])
+            *[results_flatten[i::num_columns] for i in range(num_columns)])
         table_data = [headers]
         table_data += [result for result in results_2d]
         table = AsciiTable(table_data)
@@ -215,3 +217,241 @@ def loadRes(coco_obj, anns):
     res.dataset['annotations'] = anns
     res.createIndex()
     return res
+
+
+def makeplot(rs, ps, outDir, class_name, iou_type):
+    import matplotlib.pyplot as plt
+    cs = np.vstack([
+        np.ones((2, 3)),
+        np.array([0.31, 0.51, 0.74]),
+        np.array([0.75, 0.31, 0.30]),
+        np.array([0.36, 0.90, 0.38]),
+        np.array([0.50, 0.39, 0.64]),
+        np.array([1, 0.6, 0]),
+    ])
+    areaNames = ['allarea', 'small', 'medium', 'large']
+    types = ['C75', 'C50', 'Loc', 'Sim', 'Oth', 'BG', 'FN']
+    for i in range(len(areaNames)):
+        area_ps = ps[..., i, 0]
+        figure_title = iou_type + '-' + class_name + '-' + areaNames[i]
+        aps = [ps_.mean() for ps_ in area_ps]
+        ps_curve = [
+            ps_.mean(axis=1) if ps_.ndim > 1 else ps_ for ps_ in area_ps
+        ]
+        ps_curve.insert(0, np.zeros(ps_curve[0].shape))
+        fig = plt.figure()
+        ax = plt.subplot(111)
+        for k in range(len(types)):
+            ax.plot(rs, ps_curve[k + 1], color=[0, 0, 0], linewidth=0.5)
+            ax.fill_between(
+                rs,
+                ps_curve[k],
+                ps_curve[k + 1],
+                color=cs[k],
+                label=str(f'[{aps[k]:.3f}]' + types[k]), )
+        plt.xlabel('recall')
+        plt.ylabel('precision')
+        plt.xlim(0, 1.0)
+        plt.ylim(0, 1.0)
+        plt.title(figure_title)
+        plt.legend()
+        # plt.show()
+        fig.savefig(osp.join(outDir, f'{figure_title}.png'))
+        plt.close(fig)
+
+
+def analyze_individual_category(k, cocoDt, cocoGt, catId, iou_type,
+                                areas=None):
+    """For one specific category, analyze the precision when confusion within its supercategory is ignored and when all class confusion is ignored.
+
+           Refer to https://github.com/open-mmlab/mmdetection/blob/master/tools/coco_error_analysis.py
+
+           Args:
+               k (int): index of the category to analyze.
+               cocoDt (pycocotools.coco.COCO): predictions stored as a COCO object.
+               cocoGt (pycocotools.coco.COCO): ground truth stored as a COCO object.
+               catId (int): category id, within the dataset, of the category to analyze.
+               iou_type (str): IoU computation mode; 'bbox' for detection boxes, 'segm' for pixel-level segmentation results.
+
+           Returns:
+               int: the input index k.
+               dict: with keys 'ps_supercategory' and 'ps_allcategory', holding the precision when confusion within the supercategory is ignored and the precision when confusion across all categories is ignored, respectively.
+
+        """
+
+    # matplotlib.use() must be called *before* pylab, matplotlib.pyplot,
+    # or matplotlib.backends is imported for the first time
+    # pycocotools import matplotlib
+    import matplotlib
+    matplotlib.use('Agg')
+    from pycocotools.coco import COCO
+    from pycocotools.cocoeval import COCOeval
+
+    nm = cocoGt.loadCats(catId)[0]
+    print(f'--------------analyzing {k + 1}-{nm["name"]}---------------')
+    ps_ = {}
+    dt = copy.deepcopy(cocoDt)
+    nm = cocoGt.loadCats(catId)[0]
+    imgIds = cocoGt.getImgIds()
+    dt_anns = dt.dataset['annotations']
+    select_dt_anns = []
+    for ann in dt_anns:
+        if ann['category_id'] == catId:
+            select_dt_anns.append(ann)
+    dt.dataset['annotations'] = select_dt_anns
+    dt.createIndex()
+    # compute precision but ignore superclass confusion
+    gt = copy.deepcopy(cocoGt)
+    child_catIds = gt.getCatIds(supNms=[nm['supercategory']])
+    for idx, ann in enumerate(gt.dataset['annotations']):
+        if ann['category_id'] in child_catIds and ann['category_id'] != catId:
+            gt.dataset['annotations'][idx]['ignore'] = 1
+            gt.dataset['annotations'][idx]['iscrowd'] = 1
+            gt.dataset['annotations'][idx]['category_id'] = catId
+    cocoEval = COCOeval(gt, copy.deepcopy(dt), iou_type)
+    cocoEval.params.imgIds = imgIds
+    cocoEval.params.maxDets = [100]
+    cocoEval.params.iouThrs = [0.1]
+    cocoEval.params.useCats = 1
+    if areas:
+        cocoEval.params.areaRng = [[0**2, areas[2]], [0**2, areas[0]],
+                                   [areas[0], areas[1]], [areas[1], areas[2]]]
+    cocoEval.evaluate()
+    cocoEval.accumulate()
+    ps_supercategory = cocoEval.eval['precision'][0, :, k, :, :]
+    ps_['ps_supercategory'] = ps_supercategory
+    # compute precision but ignore any class confusion
+    gt = copy.deepcopy(cocoGt)
+    for idx, ann in enumerate(gt.dataset['annotations']):
+        if ann['category_id'] != catId:
+            gt.dataset['annotations'][idx]['ignore'] = 1
+            gt.dataset['annotations'][idx]['iscrowd'] = 1
+            gt.dataset['annotations'][idx]['category_id'] = catId
+    cocoEval = COCOeval(gt, copy.deepcopy(dt), iou_type)
+    cocoEval.params.imgIds = imgIds
+    cocoEval.params.maxDets = [100]
+    cocoEval.params.iouThrs = [0.1]
+    cocoEval.params.useCats = 1
+    if areas:
+        cocoEval.params.areaRng = [[0**2, areas[2]], [0**2, areas[0]],
+                                   [areas[0], areas[1]], [areas[1], areas[2]]]
+    cocoEval.evaluate()
+    cocoEval.accumulate()
+    ps_allcategory = cocoEval.eval['precision'][0, :, k, :, :]
+    ps_['ps_allcategory'] = ps_allcategory
+    return k, ps_
+
+
+def coco_error_analysis(eval_details_file=None,
+                        gt=None,
+                        pred_bbox=None,
+                        pred_mask=None,
+                        save_dir='./output'):
+    """Analyze, factor by factor, why the model's predictions are wrong, and present the analysis as charts.
+       For an explanation of the analysis results, see the COCO dataset analysis tool description at https://cocodataset.org/#detection-eval.
+
+       Refer to https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/coco_error_analysis.py
+
+       Args:
+           eval_details_file (str):  path to the saved model evaluation results, containing ground-truth information and predictions.
+           gt (list): ground-truth information of the dataset. Defaults to None.
+           pred_bbox (list): the model's predicted boxes on the dataset. Defaults to None.
+           pred_mask (list): the model's predicted masks on the dataset. Defaults to None.
+           save_dir (str): directory where the visualizations are saved. Defaults to './output'.
+
+        Note:
+           eval_details_file takes precedence: whenever it is not None,
+           ground truth and predictions are extracted from eval_details_file for the analysis.
+           Only when eval_details_file is None are gt, pred_bbox and pred_mask used.
+
+    """
+
+    import multiprocessing as mp
+    # matplotlib.use() must be called *before* pylab, matplotlib.pyplot,
+    # or matplotlib.backends is imported for the first time
+    # pycocotools import matplotlib
+    import matplotlib
+    matplotlib.use('Agg')
+    from pycocotools.coco import COCO
+    from pycocotools.cocoeval import COCOeval
+
+    if eval_details_file is not None:
+        import json
+        with open(eval_details_file, 'r') as f:
+            eval_details = json.load(f)
+            pred_bbox = eval_details['bbox']
+            if 'mask' in eval_details:
+                pred_mask = eval_details['mask']
+            gt = eval_details['gt']
+    if gt is None or pred_bbox is None:
+        raise Exception(
+            "gt and pred_bbox must not be None. Please provide a valid eval_details_file or pass gt/pred_bbox/pred_mask."
+        )
+    if pred_bbox is not None and len(pred_bbox) == 0:
+        raise Exception("There is no predicted bbox.")
+    if pred_mask is not None and len(pred_mask) == 0:
+        raise Exception("There is no predicted mask.")
+
+    def _analyze_results(cocoGt, cocoDt, res_type, out_dir):
+        directory = osp.dirname(osp.join(out_dir, ''))
+        if not osp.exists(directory):
+            logging.info('-------------create {}-----------------'.format(
+                out_dir))
+            os.makedirs(directory)
+
+        imgIds = cocoGt.getImgIds()
+        res_out_dir = osp.join(out_dir, res_type, '')
+        res_directory = os.path.dirname(res_out_dir)
+        if not os.path.exists(res_directory):
+            logging.info('-------------create {}-----------------'.format(
+                res_out_dir))
+            os.makedirs(res_directory)
+        iou_type = res_type
+        cocoEval = COCOeval(
+            copy.deepcopy(cocoGt), copy.deepcopy(cocoDt), iou_type)
+        cocoEval.params.imgIds = imgIds
+        cocoEval.params.iouThrs = [.75, .5, .1]
+        cocoEval.params.maxDets = [100]
+        cocoEval.evaluate()
+        cocoEval.accumulate()
+        ps = cocoEval.eval['precision']
+        ps = np.vstack([ps, np.zeros((4, *ps.shape[1:]))])
+        catIds = cocoGt.getCatIds()
+        recThrs = cocoEval.params.recThrs
+        thread_num = mp.cpu_count() if mp.cpu_count() < 8 else 8
+        thread_pool = mp.pool.ThreadPool(thread_num)
+        args = [(k, cocoDt, cocoGt, catId, iou_type)
+                for k, catId in enumerate(catIds)]
+        analyze_results = thread_pool.starmap(analyze_individual_category,
+                                              args)
+        for k, catId in enumerate(catIds):
+            nm = cocoGt.loadCats(catId)[0]
+            logging.info('--------------saving {}-{}---------------'.format(
+                k + 1, nm['name']))
+            analyze_result = analyze_results[k]
+            assert k == analyze_result[0], ""
+            ps_supercategory = analyze_result[1]['ps_supercategory']
+            ps_allcategory = analyze_result[1]['ps_allcategory']
+            # compute precision but ignore superclass confusion
+            ps[3, :, k, :, :] = ps_supercategory
+            # compute precision but ignore any class confusion
+            ps[4, :, k, :, :] = ps_allcategory
+            # fill in background and false negative errors and plot
+            ps[ps == -1] = 0
+            ps[5, :, k, :, :] = ps[4, :, k, :, :] > 0
+            ps[6, :, k, :, :] = 1.0
+            makeplot(recThrs, ps[:, :, k], res_out_dir, nm['name'], iou_type)
+        makeplot(recThrs, ps, res_out_dir, 'allclass', iou_type)
+
+    coco_gt = COCO()
+    coco_gt.dataset = gt
+    coco_gt.createIndex()
+
+    if pred_bbox is not None:
+        coco_dt = loadRes(coco_gt, pred_bbox)
+        _analyze_results(coco_gt, coco_dt, res_type='bbox', out_dir=save_dir)
+    if pred_mask is not None:
+        coco_dt = loadRes(coco_gt, pred_mask)
+        _analyze_results(coco_gt, coco_dt, res_type='segm', out_dir=save_dir)
+    logging.info("The analysis figures are saved in {}".format(save_dir))
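
For reference, a minimal sketch of the `eval_details.json` layout that `coco_error_analysis` consumes, inferred from the loader above: `gt` must be a COCO-style dataset dict (it is assigned to `COCO.dataset` and indexed), while `bbox` and the optional `mask` are prediction lists in COCO result format. The file written here is illustrative only; real files are produced by PaddleX during evaluation:

```python
import json

# Hypothetical skeleton for illustration; fill with real COCO-style data.
eval_details = {
    'gt': {'images': [], 'annotations': [], 'categories': []},  # ground truth
    'bbox': [],  # predicted boxes, COCO result format
    'mask': [],  # optional, predicted masks for instance segmentation
}
with open('eval_details.json', 'w') as f:
    json.dump(eval_details, f)
```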

+ 2 - 12
paddlex/cv/transforms/batch_operators.py

@@ -26,14 +26,10 @@ from paddlex.utils import logging
 
 
 class BatchCompose(Transform):
-    def __init__(self,
-                 batch_transforms=None,
-                 collate_batch=True,
-                 return_list=False):
+    def __init__(self, batch_transforms=None, collate_batch=True):
         super(BatchCompose, self).__init__()
         self.batch_transforms = batch_transforms
         self.collate_batch = collate_batch
-        self.return_list = return_list
 
     def __call__(self, samples):
         if self.batch_transforms is not None:
@@ -55,13 +51,7 @@ class BatchCompose(Transform):
                 if k in sample:
                     sample.pop(k)
 
-        if self.return_list:
-            batch_data = [{
-                k: np.expand_dims(
-                    sample[k], axis=0)
-                for k in sample
-            } for sample in samples]
-        elif self.collate_batch:
+        if self.collate_batch:
             batch_data = default_collate_fn(samples)
         else:
             batch_data = {}
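
With `return_list` removed, `BatchCompose.__call__` has two remaining paths: collate the per-sample dicts into batched arrays, or group per-sample values under each key. A rough numpy sketch of the distinction, assuming each sample is a dict of equally shaped arrays (this mimics, rather than calls, the actual `default_collate_fn`):

```python
import numpy as np

samples = [{'image': np.zeros((3, 224, 224))} for _ in range(2)]

# collate_batch=True: stack each key into one batched array
collated = {k: np.stack([s[k] for s in samples]) for k in samples[0]}
print(collated['image'].shape)  # (2, 3, 224, 224)

# collate_batch=False: keep a list of per-sample arrays under each key
grouped = {k: [s[k] for s in samples] for k in samples[0]}
print(len(grouped['image']))  # 2
```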

+ 12 - 27
paddlex/deploy.py

@@ -166,16 +166,10 @@ class Predictor(object):
                     'score_map': s
                 } for l, s in zip(label_map, score_map)]
         elif self._model.model_type == 'detector':
-            if 'RCNN' in self._model.__class__.__name__:
-                net_outputs = [{
-                    k: v
-                    for k, v in zip(['bbox', 'bbox_num', 'mask'], res)
-                } for res in net_outputs]
-            else:
-                net_outputs = {
-                    k: v
-                    for k, v in zip(['bbox', 'bbox_num', 'mask'], net_outputs)
-                }
+            net_outputs = {
+                k: v
+                for k, v in zip(['bbox', 'bbox_num', 'mask'], net_outputs)
+            }
             preds = self._model._postprocess(net_outputs)
             if len(preds) == 1:
                 preds = preds[0]
@@ -210,25 +204,16 @@ class Predictor(object):
         preprocessed_input = self.preprocess(images, transforms)
         self.timer.preprocess_time_s.end(iter_num=len(images))
 
-        ori_shape = None
         self.timer.inference_time_s.start()
-        if 'RCNN' in self._model.__class__.__name__:
-            if len(preprocessed_input) > 1:
-                logging.warning(
-                    "{} only supports inference with batch size equal to 1."
-                    .format(self._model.__class__.__name__))
-            net_outputs = [
-                self.raw_predict(sample) for sample in preprocessed_input
-            ]
-            self.timer.inference_time_s.end(iter_num=len(images))
-        else:
-            net_outputs = self.raw_predict(preprocessed_input)
-            self.timer.inference_time_s.end(iter_num=1)
-            ori_shape = preprocessed_input.get('ori_shape', None)
+        net_outputs = self.raw_predict(preprocessed_input)
+        self.timer.inference_time_s.end(iter_num=1)
 
         self.timer.postprocess_time_s.start()
         results = self.postprocess(
-            net_outputs, topk, ori_shape=ori_shape, transforms=transforms)
+            net_outputs,
+            topk,
+            ori_shape=preprocessed_input.get('ori_shape', None),
+            transforms=transforms)
         self.timer.postprocess_time_s.end(iter_num=len(images))
 
         return results
@@ -259,11 +244,11 @@ class Predictor(object):
         else:
             images = img_file
 
-        for step in range(warmup_iters):
+        for _ in range(warmup_iters):
             self._run(images=images, topk=topk, transforms=transforms)
         self.timer.reset()
 
-        for step in range(repeats):
+        for _ in range(repeats):
             results = self._run(
                 images=images, topk=topk, transforms=transforms)
 

+ 2 - 0
paddlex/det.py

@@ -15,6 +15,7 @@
 import sys
 from . import cv
 from .cv.models.utils.visualize import visualize_detection, draw_pr_curve
+from .cv.models.utils.det_metrics.coco_utils import coco_error_analysis
 
 message = 'Your running script needs PaddleX<2.0.0, please refer to {} to solve this issue.'.format(
     'https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train#%E7%89%88%E6%9C%AC%E5%8D%87%E7%BA%A7'
@@ -31,6 +32,7 @@ def __getattr__(attr):
 
 visualize = visualize_detection
 draw_pr_curve = draw_pr_curve
+coco_error_analysis = coco_error_analysis
 
 # detection
 YOLOv3 = cv.models.YOLOv3
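
Note: the new alias exposes the COCO error analysis as
paddlex.det.coco_error_analysis, mirroring the visualize and draw_pr_curve
aliases above. A hedged usage sketch; the keyword names should be checked
against the docs/apis/visualize.md entry this commit adds, and the paths are
hypothetical:

    import paddlex as pdx

    # break detection errors down by error type for each category
    pdx.det.coco_error_analysis(
        eval_details_file='output/ppyolo/eval_details.json',  # hypothetical path
        save_dir='./output/error_analysis')                   # hypothetical path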

+ 1 - 0
static/deploy/openvino/src/paddlex.cpp

@@ -205,6 +205,7 @@ bool Model::predict(const cv::Mat& im, DetResult* result) {
       result->boxes.push_back(std::move(box));
     }
   }
+  return true;
 }
 
 

+ 6 - 6
tutorials/slim/prune/image_classification/mobilenetv2_prune.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -33,17 +33,17 @@ eval_dataset = pdx.datasets.ImageNet(
 model = pdx.load_model('output/mobilenet_v2/best_model')
 
 # Step 1/3: Analyze the sensitivity of each layer's parameters under different pruning ratios
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md#analyze_sensitivity
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md#analyze_sensitivity
 model.analyze_sensitivity(
     dataset=eval_dataset, save_dir='output/mobilenet_v2/prune')
 
 # Step 2/3: Prune the model according to the chosen FLOPs reduction ratio
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md#prune
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md#prune
 model.prune(pruned_flops=.2)
 
 # Step 3/3: Retrain the pruned model
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md#train
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md#train
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,
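
Note: the three steps above chain into one short script. A condensed sketch of
the flow; the model path and the 0.2 FLOPs ratio come from this tutorial, while
train_dataset/eval_dataset are assumed to be defined as earlier in the file and
the eval_dataset/save_dir keywords of train() are illustrative:

    import paddlex as pdx

    model = pdx.load_model('output/mobilenet_v2/best_model')
    # 1) measure how accuracy degrades as each layer is pruned harder
    model.analyze_sensitivity(dataset=eval_dataset,
                              save_dir='output/mobilenet_v2/prune')
    # 2) prune filters until roughly 20% of the FLOPs are removed
    model.prune(pruned_flops=.2)
    # 3) fine-tune to recover the accuracy lost to pruning
    model.train(num_epochs=10,
                train_dataset=train_dataset,
                eval_dataset=eval_dataset,   # assumed keyword
                save_dir='output/mobilenet_v2/prune')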

+ 4 - 4
tutorials/slim/prune/image_classification/mobilenetv2_train.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.MobileNetV2(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 6 - 6
tutorials/slim/prune/object_detection/yolov3_prune.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.MixupImage(mixup_epoch=250), T.RandomDistort(),
     T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
@@ -23,7 +23,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -42,19 +42,19 @@ eval_dataset = pdx.datasets.VOCDetection(
 model = pdx.load_model('output/yolov3_darknet53/best_model')
 
 # Step 1/3: Analyze the sensitivity of each layer's parameters under different pruning ratios
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md#analyze_sensitivity
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md#analyze_sensitivity
 model.analyze_sensitivity(
     dataset=eval_dataset,
     batch_size=1,
     save_dir='output/yolov3_darknet53/prune')
 
 # Step 2/3: Prune the model according to the chosen FLOPs reduction ratio
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md#prune
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md#prune
 model.prune(pruned_flops=.2)
 
 # Step 3/3: Retrain the pruned model
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md#train
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md#train
 model.train(
     num_epochs=270,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/slim/prune/object_detection/yolov3_train.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.MixupImage(mixup_epoch=250), T.RandomDistort(),
     T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
@@ -23,7 +23,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -43,8 +43,8 @@ eval_dataset = pdx.datasets.VOCDetection(
 num_classes = len(train_dataset.labels)
 model = pdx.det.YOLOv3(num_classes=num_classes, backbone='DarkNet53')
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=270,
     train_dataset=train_dataset,

+ 6 - 6
tutorials/slim/prune/semantic_segmentation/unet_prune.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -40,17 +40,17 @@ eval_dataset = pdx.datasets.SegDataset(
 model = pdx.load_model('output/unet/best_model')
 
 # Step 1/3: Analyze the sensitivity of each layer's parameters under different pruning ratios
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md#analyze_sensitivity
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md#analyze_sensitivity
 model.analyze_sensitivity(
     dataset=eval_dataset, batch_size=1, save_dir='output/unet/prune')
 
 # Step 2/3: Prune the model according to the chosen FLOPs reduction ratio
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md#prune
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md#prune
 model.prune(pruned_flops=.2)
 
 # Step 3/3: Retrain the pruned model
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md#train
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md#train
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/slim/prune/semantic_segmentation/unet_train.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,8 +41,8 @@ eval_dataset = pdx.datasets.SegDataset(
 num_classes = len(train_dataset.labels)
 model = pdx.seg.UNet(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 1 - 1
tutorials/slim/quantize/instance_segmentation/mask_rcnn_qat.py

@@ -22,7 +22,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/datasets/coco.py#L26
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='xiaoduxiong_ins_det/JPEGImages',
     ann_file='xiaoduxiong_ins_det/train.json',

+ 1 - 1
tutorials/slim/quantize/instance_segmentation/mask_rcnn_train.py

@@ -22,7 +22,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/cv/datasets/coco.py#L26
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='xiaoduxiong_ins_det/JPEGImages',
     ann_file='xiaoduxiong_ins_det/train.json',

+ 3 - 3
tutorials/slim/quantize/semantic_segmentation/unet_qat.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -40,7 +40,7 @@ eval_dataset = pdx.datasets.SegDataset(
 model = pdx.load_model('output/unet/best_model')
 
 # Online quantization (quantization-aware training)
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md#quant_aware_train
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md#quant_aware_train
 model.quant_aware_train(
     num_epochs=5,
     train_dataset=train_dataset,
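
Note: quant_aware_train retrains the loaded FP32 model with quantization
simulated during training, so the exported model can run at lower precision.
A condensed sketch; only num_epochs and train_dataset appear in the hunk, the
remaining keywords and paths are illustrative:

    import paddlex as pdx

    model = pdx.load_model('output/unet/best_model')  # FP32 baseline
    model.quant_aware_train(
        num_epochs=5,
        train_dataset=train_dataset,   # dataset defined as above
        eval_dataset=eval_dataset,     # assumed keyword
        save_dir='output/unet/quant')  # illustrative path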

+ 4 - 4
tutorials/slim/quantize/semantic_segmentation/unet_train.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,8 +41,8 @@ eval_dataset = pdx.datasets.SegDataset(
 num_classes = len(train_dataset.labels)
 model = pdx.seg.UNet(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/image_classification/alexnet.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.AlexNet(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/image_classification/darknet53.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.DarkNet53(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/image_classification/densenet121.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.DenseNet121(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/image_classification/hrnet_w18_c.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.HRNet_W18_C(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 2 - 2
tutorials/train/image_classification/mobilenetv3_large_w_custom_optimizer.py

@@ -7,7 +7,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -16,7 +16,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',

+ 4 - 4
tutorials/train/image_classification/mobilenetv3_small.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.MobileNetV3_small(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/image_classification/resnet50_vd_ssld.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.ResNet50_vd_ssld(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/image_classification/shufflenetv2.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.ShuffleNetV2(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/image_classification/xception41.py

@@ -6,7 +6,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose(
     [T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
 
@@ -15,7 +15,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -34,8 +34,8 @@ eval_dataset = pdx.datasets.ImageNet(
 num_classes = len(train_dataset.labels)
 model = pdx.cls.Xception41(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/classification.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/classification.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/tree/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/xiaoduxiong_ins_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.RandomResizeByShort(
         short_sizes=[640, 672, 704, 736, 768, 800],
@@ -22,7 +22,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='xiaoduxiong_ins_det/JPEGImages',
     ann_file='xiaoduxiong_ins_det/train.json',
@@ -39,8 +39,8 @@ num_classes = len(train_dataset.labels)
 model = pdx.det.MaskRCNN(
     num_classes=num_classes, backbone='ResNet50', with_fpn=True)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/instance_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/instance_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/object_detection/faster_rcnn_hrnet_w18.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.RandomResizeByShort(
         short_sizes=[640, 672, 704, 736, 768, 800],
@@ -22,7 +22,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -42,8 +42,8 @@ eval_dataset = pdx.datasets.VOCDetection(
 num_classes = len(train_dataset.labels)
 model = pdx.det.FasterRCNN(num_classes=num_classes, backbone='HRNet_W18')
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=24,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/object_detection/faster_rcnn_r50_fpn.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.RandomResizeByShort(
         short_sizes=[640, 672, 704, 736, 768, 800],
@@ -22,7 +22,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -43,8 +43,8 @@ num_classes = len(train_dataset.labels)
 model = pdx.det.FasterRCNN(
     num_classes=num_classes, backbone='ResNet50', with_fpn=True)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/object_detection/ppyolo.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.MixupImage(mixup_epoch=-1), T.RandomDistort(),
     T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
@@ -23,7 +23,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -43,8 +43,8 @@ eval_dataset = pdx.datasets.VOCDetection(
 num_classes = len(train_dataset.labels)
 model = pdx.det.PPYOLO(num_classes=num_classes, backbone='ResNet50_vd_dcn')
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=200,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/object_detection/ppyolotiny.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.MixupImage(mixup_epoch=-1), T.RandomDistort(),
     T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
@@ -23,7 +23,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -43,8 +43,8 @@ eval_dataset = pdx.datasets.VOCDetection(
 num_classes = len(train_dataset.labels)
 model = pdx.det.PPYOLOTiny(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=550,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/object_detection/ppyolov2.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.MixupImage(mixup_epoch=-1), T.RandomDistort(),
     T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
@@ -26,7 +26,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -46,8 +46,8 @@ eval_dataset = pdx.datasets.VOCDetection(
 num_classes = len(train_dataset.labels)
 model = pdx.det.PPYOLOv2(num_classes=num_classes, backbone='ResNet50_vd_dcn')
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=170,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/object_detection/yolov3_darknet53.py

@@ -6,7 +6,7 @@ dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.MixupImage(mixup_epoch=250), T.RandomDistort(),
     T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
@@ -23,7 +23,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -43,8 +43,8 @@ eval_dataset = pdx.datasets.VOCDetection(
 num_classes = len(train_dataset.labels)
 model = pdx.det.YOLOv3(num_classes=num_classes, backbone='DarkNet53')
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/detection.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/detection.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=270,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/semantic_segmentation/bisenetv2.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,8 +41,8 @@ eval_dataset = pdx.datasets.SegDataset(
 num_classes = len(train_dataset.labels)
 model = pdx.seg.BiSeNetV2(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/semantic_segmentation/deeplabv3p_resnet50_vd.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,8 +41,8 @@ eval_dataset = pdx.datasets.SegDataset(
 num_classes = len(train_dataset.labels)
 model = pdx.seg.DeepLabV3P(num_classes=num_classes, backbone='ResNet50_vd')
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/semantic_segmentation/fastscnn.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,8 +41,8 @@ eval_dataset = pdx.datasets.SegDataset(
 num_classes = len(train_dataset.labels)
 model = pdx.seg.FastSCNN(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/semantic_segmentation/hrnet.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,8 +41,8 @@ eval_dataset = pdx.datasets.SegDataset(
 num_classes = len(train_dataset.labels)
 model = pdx.seg.HRNet(num_classes=num_classes, width=48)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 4 - 4
tutorials/train/semantic_segmentation/unet.py

@@ -6,7 +6,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/transforms/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
 train_transforms = T.Compose([
     T.Resize(target_size=512),
     T.RandomHorizontalFlip(),
@@ -21,7 +21,7 @@ eval_transforms = T.Compose([
 ])
 
 # Define the datasets for training and validation
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/datasets.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,8 +41,8 @@ eval_dataset = pdx.datasets.SegDataset(
 num_classes = len(train_dataset.labels)
 model = pdx.seg.UNet(num_classes=num_classes)
 
-# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/apis/models/semantic_segmentation.md
-# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/docs/parameters.md
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/models/semantic_segmentation.md
+# Parameter descriptions and tuning guide: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/parameters.md
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,