
Merge pull request #215 from PaddlePaddle/jason

Jason committed 5 years ago
commit 3851b68c9b
30 changed files with 241 additions and 50 deletions
  1. docs/apis/datasets.md (+8 -8)
  2. docs/apis/transforms/cls_transforms.md (+2 -0)
  3. docs/apis/transforms/det_transforms.md (+2 -0)
  4. docs/apis/transforms/seg_transforms.md (+2 -0)
  5. docs/quick_start.md (+9 -7)
  6. docs/train/classification.md (+4 -4)
  7. docs/train/instance_segmentation.md (+3 -3)
  8. docs/train/object_detection.md (+6 -6)
  9. docs/train/semantic_segmentation.md (+6 -6)
  10. paddlex/__init__.py (+4 -2)
  11. tutorials/train/image_classification/alexnet.py (+11 -1)
  12. tutorials/train/image_classification/mobilenetv2.py (+9 -0)
  13. tutorials/train/image_classification/mobilenetv3_small_ssld.py (+9 -0)
  14. tutorials/train/image_classification/resnet50_vd_ssld.py (+9 -0)
  15. tutorials/train/image_classification/shufflenetv2.py (+9 -0)
  16. tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py (+9 -1)
  17. tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py (+9 -1)
  18. tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py (+9 -1)
  19. tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py (+9 -1)
  20. tutorials/train/object_detection/faster_rcnn_r18_fpn.py (+12 -0)
  21. tutorials/train/object_detection/faster_rcnn_r50_fpn.py (+12 -0)
  22. tutorials/train/object_detection/yolov3_darknet53.py (+11 -0)
  23. tutorials/train/object_detection/yolov3_mobilenetv1.py (+11 -0)
  24. tutorials/train/object_detection/yolov3_mobilenetv3.py (+11 -0)
  25. tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py (+10 -1)
  26. tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2_x0.25.py (+9 -1)
  27. tutorials/train/semantic_segmentation/deeplabv3p_xception65.py (+9 -1)
  28. tutorials/train/semantic_segmentation/fast_scnn.py (+9 -4)
  29. tutorials/train/semantic_segmentation/hrnet.py (+9 -1)
  30. tutorials/train/semantic_segmentation/unet.py (+9 -1)

+ 8 - 8
docs/apis/datasets.md

@@ -7,7 +7,7 @@ paddlex.datasets.ImageNet(data_dir, file_list, label_list, transforms=None, num_
 ```
 读取ImageNet格式的分类数据集,并对样本进行相应的处理。ImageNet数据集格式的介绍可查看文档:[数据集格式说明](../data/format/index.html)  
 
-示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/classification/mobilenetv2.py#L25)
+示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/mobilenetv2.py)
 
 > **参数**
 
@@ -20,15 +20,15 @@ paddlex.datasets.ImageNet(data_dir, file_list, label_list, transforms=None, num_
 > > * **parallel_method** (str): 数据集中样本在预处理过程中并行处理的方式,支持'thread'线程和'process'进程两种方式。默认为'process'(Windows和Mac下会强制使用thread,该参数无效)。  
 > > * **shuffle** (bool): 是否需要对数据集中样本打乱顺序。默认为False。  
 
-## paddlex.datasets.PascalVOC
+## paddlex.datasets.VOCDetection
 > **用于目标检测模型**  
 ```
-paddlex.datasets.PascalVOC(data_dir, file_list, label_list, transforms=None, num_workers='auto', buffer_size=100, parallel_method='thread', shuffle=False)
+paddlex.datasets.VOCDetection(data_dir, file_list, label_list, transforms=None, num_workers='auto', buffer_size=100, parallel_method='thread', shuffle=False)
 ```
 
 > 读取PascalVOC格式的检测数据集,并对样本进行相应的处理。PascalVOC数据集格式的介绍可查看文档:[数据集格式说明](../data/format/index.html)  
 
-> 示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/yolov3_darknet53.py#L29)
+> 示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/yolov3_darknet53.py)
 
 > **参数**
 
@@ -41,15 +41,15 @@ paddlex.datasets.PascalVOC(data_dir, file_list, label_list, transforms=None, num
 > > * **parallel_method** (str): 数据集中样本在预处理过程中并行处理的方式,支持'thread'线程和'process'进程两种方式。默认为'process'(Windows和Mac下会强制使用thread,该参数无效)。  
 > > * **shuffle** (bool): 是否需要对数据集中样本打乱顺序。默认为False。  
 
-## paddlex.datasets.MSCOCO
+## paddlex.datasets.CocoDetection
 > **用于实例分割/目标检测模型**  
 ```
-paddlex.datasets.MSCOCO(data_dir, ann_file, transforms=None, num_workers='auto', buffer_size=100, parallel_method='thread', shuffle=False)
+paddlex.datasets.CocoDetection(data_dir, ann_file, transforms=None, num_workers='auto', buffer_size=100, parallel_method='thread', shuffle=False)
 ```
 
 > 读取MSCOCO格式的检测数据集,并对样本进行相应的处理,该格式的数据集同样可以应用到实例分割模型的训练中。MSCOCO数据集格式的介绍可查看文档:[数据集格式说明](../data/format/index.html)  
 
-> 示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/mask_rcnn_r50_fpn.py#L27)
+> 示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py)
 
 > **参数**
 
@@ -69,7 +69,7 @@ paddlex.datasets.SegDataset(data_dir, file_list, label_list, transforms=None, nu
 
 > 读取语义分割任务数据集,并对样本进行相应的处理。语义分割任务数据集格式的介绍可查看文档:[数据集格式说明](../data/format/index.html)  
 
-> 示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/segmentation/unet.py#L27)
+> 示例:[代码文件](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/unet.py)
 
 > **参数**
 
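For orientation, the file_list consumed by these dataset classes (e.g. `paddlex.datasets.ImageNet`) is a plain text file with one `relative/image/path label_index` pair per line, as described in the linked 数据集格式说明. Below is a minimal stand-alone parser sketch of that format — illustrative only (the function name and demo paths are made up), not PaddleX's actual loader:

```python
# Illustrative only: a stand-alone parser for the ImageNet-style file_list
# format that paddlex.datasets.ImageNet reads (one "path label_index" per
# line). This mirrors the documented format, not PaddleX's implementation.
from io import StringIO

def parse_file_list(fp):
    """Return a list of (image_path, label_id) tuples from a file_list stream."""
    samples = []
    for line in fp:
        line = line.strip()
        if not line:
            continue
        path, label = line.rsplit(" ", 1)  # label index is the last field
        samples.append((path, int(label)))
    return samples

demo = StringIO("bocai/1.jpg 0\nchangqiezi/2.jpg 1\n")
print(parse_file_list(demo))  # [('bocai/1.jpg', 0), ('changqiezi/2.jpg', 1)]
```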

+ 2 - 0
docs/apis/transforms/cls_transforms.md

@@ -122,6 +122,7 @@ paddlex.cls.transforms.RandomDistort(brightness_range=0.9, brightness_prob=0.5,
 * **hue_range** (int): 色调因子的范围。默认为18。
 * **hue_prob** (float): 随机调整色调的概率。默认为0.5。
 
+<!--
 ## ComposedClsTransforms
 ```python
 paddlex.cls.transforms.ComposedClsTransforms(mode, crop_size=[224, 224], mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], random_horizontal_flip=True)
@@ -183,3 +184,4 @@ eval_transforms = transforms.Composed([
 		transforms.Normalize()
 ])
 ```
+-->

+ 2 - 0
docs/apis/transforms/det_transforms.md

@@ -168,6 +168,7 @@ paddlex.det.transforms.RandomCrop(aspect_ratio=[.5, 2.], thresholds=[.0, .1, .3,
 * **allow_no_crop** (bool): 是否允许未进行裁剪。默认值为True。
 * **cover_all_box** (bool): 是否要求所有的真实标注框都必须在裁剪区域内。默认值为False。
 
+<!--
 ## ComposedRCNNTransforms
 ```python
 paddlex.det.transforms.ComposedRCNNTransforms(mode, min_max_size=[224, 224], mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], random_horizontal_flip=True)
@@ -302,3 +303,4 @@ eval_transforms = transforms.Composed([
 		transforms.Normalize()
 ])
 ```
+-->

+ 2 - 0
docs/apis/transforms/seg_transforms.md

@@ -167,6 +167,7 @@ paddlex.seg.transforms.RandomDistort(brightness_range=0.5, brightness_prob=0.5,
 * **hue_range** (int): 色调因子的范围。默认为18。
 * **hue_prob** (float): 随机调整色调的概率。默认为0.5。
 
+<!--
 ## ComposedSegTransforms
 ```python
 paddlex.det.transforms.ComposedSegTransforms(mode, min_max_size=[400, 600], train_crop_shape=[769, 769], mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], random_horizontal_flip=True)
@@ -228,3 +229,4 @@ eval_transforms = transforms.Composed([
         transforms.Normalize()
 ])
 ```
+-->

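Each `*_prob` parameter documented in these transforms (e.g. `hue_prob`) means the corresponding distortion is applied independently with that probability. The helper below is a sketch of that pattern under assumed names — not PaddleX internals:

```python
# Illustrative only: apply a transform with a given probability, as the
# *_prob parameters of RandomDistort describe. Names here are assumptions.
import random

def maybe_apply(transform, prob, sample, rng=random):
    """Apply transform to sample with probability prob, else pass through."""
    return transform(sample) if rng.random() < prob else sample

rng = random.Random(0)  # seeded so the example is reproducible
result = maybe_apply(lambda x: x + 10, 0.5, 1, rng)
```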
+ 9 - 7
docs/quick_start.md

@@ -1,6 +1,8 @@
 # 10分钟快速上手使用
 
-本文档在一个小数据集上展示了如何通过PaddleX进行训练,您可以阅读PaddleX的**使用教程**来了解更多模型任务的训练使用方式。本示例同步在AIStudio上,可直接[在线体验模型训练](https://aistudio.baidu.com/aistudio/projectdetail/439860)
+本文档在一个小数据集上展示了如何通过PaddleX进行训练。本示例同步在AIStudio上,可直接[在线体验模型训练](https://aistudio.baidu.com/aistudio/projectdetail/450220)。  
+
+本示例代码源于Github [tutorials/train/image_classification/mobilenetv3_small_ssld.py](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/mobilenetv3_small_ssld.py),用户可自行下载至本地运行。  
 
 PaddleX中的所有模型训练跟随以下3个步骤,即可快速完成训练代码开发!
 
@@ -35,7 +37,7 @@ tar xzvf vegetables_cls.tar.gz
 <a name="定义训练验证图像处理流程transforms"></a>
 **3. 定义训练/验证图像处理流程transforms**  
 
-由于训练时数据增强操作的加入,因此模型在训练和验证过程中,数据处理流程需要分别进行定义。如下所示,代码在`train_transforms`中加入了[RandomCrop](apis/transforms/cls_transforms.html#RandomCrop)和[RandomHorizontalFlip](apis/transforms/cls_transforms.html#RandomHorizontalFlip)两种数据增强方式, 更多方法可以参考[数据增强文档](apis/transforms/augment.md)。
+由于训练时数据增强操作的加入,因此模型在训练和验证过程中,数据处理流程需要分别进行定义。如下所示,代码在`train_transforms`中加入了[RandomCrop](apis/transforms/cls_transforms.html#randomcrop)和[RandomHorizontalFlip](apis/transforms/cls_transforms.html#randomhorizontalflip)两种数据增强方式, 更多方法可以参考[数据增强文档](apis/transforms/augment.md)。
 ```
 from paddlex.cls import transforms
 train_transforms = transforms.Compose([
@@ -54,7 +56,7 @@ eval_transforms = transforms.Compose([
 **4. 定义`dataset`加载图像分类数据集**  
 
 定义数据集,`pdx.datasets.ImageNet`表示读取ImageNet格式的分类数据集
-- [paddlex.datasets.ImageNet接口说明](apis/datasets/classification.md)
+- [paddlex.datasets.ImageNet接口说明](apis/datasets.md)
 - [ImageNet数据格式说明](data/format/classification.md)
 
 ```
@@ -118,7 +120,7 @@ Predict Result: Predict Result: [{'score': 0.9999393, 'category': 'bocai', 'cate
 
 <a name="更多使用教程"></a>
 **更多使用教程**
-- 1.[目标检测模型训练](tutorials/train/detection.md)
-- 2.[语义分割模型训练](tutorials/train/segmentation.md)
-- 3.[实例分割模型训练](tutorials/train/instance_segmentation.md)
-- 4.[模型太大,想要更小的模型,试试模型裁剪吧!](tutorials/compress/classification.md)
+- 1.[目标检测模型训练](train/object_detection.md)
+- 2.[语义分割模型训练](train/semantic_segmentation.md)
+- 3.[实例分割模型训练](train/instance_segmentation.md)
+- 4.[模型太大,想要更小的模型,试试模型裁剪吧!](https://github.com/PaddlePaddle/PaddleX/tree/develop/tutorials/compress)

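Conceptually, the `transforms.Compose` used in the quick start chains a list of callables and applies them to each sample in order; the training pipeline adds random augmentation while the evaluation pipeline stays deterministic. A minimal sketch of that pattern (not the actual `paddlex.cls.transforms` implementation):

```python
# Minimal sketch of the Compose pattern: apply each transform in order.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, sample):
        for t in self.transforms:
            sample = t(sample)
        return sample

# Toy pipelines on integers instead of images, to show that order matters.
train_pipeline = Compose([lambda x: x * 2, lambda x: x + 1])
eval_pipeline = Compose([lambda x: x + 1])
print(train_pipeline(3))  # 7
print(eval_pipeline(3))   # 4
```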
+ 4 - 4
docs/train/classification.md

@@ -10,10 +10,10 @@ PaddleX共提供了20+的图像分类模型,可满足开发者不同场景的
 
 | 模型(点击获取代码)               | Top1精度 | 模型大小 | GPU预测速度 | Arm预测速度 | 备注 |
 | :----------------  | :------- | :------- | :---------  | :---------  | :-----    |
-| [MobileNetV3_small_ssld](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/image_classification/mobilenetv3_small_ssld.py) |  71.3%  |  21.0MB  |  6.809ms   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
-| [MobileNetV2](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/image_classification/mobilenetv2.py)        | 72.2%  | 14.0MB   |  4.546ms  | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
-| [ShuffleNetV2](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/image_classification/shufflenetv2.py)     | 68.8%  | 9.0MB   | 6.101ms   | -  |  模型体积小,预测速度快,适用于低性能或移动端设备   |
-| [ResNet50_vd_ssld](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/image_classification/resnet50_vd_ssld.py)   |  82.4%   |   102.8MB    |  9.058ms       |   -    | 模型精度高,适用于服务端部署   |
+| [MobileNetV3_small_ssld](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/mobilenetv3_small_ssld.py) |  71.3%  |  21.0MB  |  6.809ms   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
+| [MobileNetV2](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/mobilenetv2.py)        | 72.2%  | 14.0MB   |  4.546ms  | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
+| [ShuffleNetV2](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/shufflenetv2.py)     | 68.8%  | 9.0MB   | 6.101ms   | -  |  模型体积小,预测速度快,适用于低性能或移动端设备   |
+| [ResNet50_vd_ssld](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/resnet50_vd_ssld.py)   |  82.4%   |   102.8MB    |  9.058ms       |   -    | 模型精度高,适用于服务端部署   |
 
 
 ## 开始训练

+ 3 - 3
docs/train/instance_segmentation.md

@@ -10,9 +10,9 @@ PaddleX目前提供了MaskRCNN实例分割模型结构,多种backbone模型,
 
 | 模型(点击获取代码)               | Box MMAP/Seg MMAP | 模型大小 | GPU预测速度 | Arm预测速度 | 备注 |
 | :----------------  | :------- | :------- | :---------  | :---------  | :-----    |
-| [MaskRCNN-ResNet50-FPN](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py)   |  36.5%/32.2%   |   170.0MB    |  160.185ms       |   -    | 模型精度高,适用于服务端部署   |
-| [MaskRCNN-ResNet18-FPN](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py)   |  -/-   |   120.0MB    |  -       |   -    | 模型精度高,适用于服务端部署   |
-| [MaskRCNN-HRNet-FPN](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py)   |  -/-   |   116.MB    |  -       |   -    | 模型精度高,预测速度快,适用于服务端部署   |
+| [MaskRCNN-ResNet50-FPN](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py)   |  36.5%/32.2%   |   170.0MB    |  160.185ms       |   -    | 模型精度高,适用于服务端部署   |
+| [MaskRCNN-ResNet18-FPN](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py)   |  -/-   |   120.0MB    |  -       |   -    | 模型精度高,适用于服务端部署   |
+| [MaskRCNN-HRNet-FPN](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py)   |  -/-   |   116.0MB    |  -       |   -    | 模型精度高,预测速度快,适用于服务端部署   |
 
 
 ## 开始训练

+ 6 - 6
docs/train/object_detection.md

@@ -10,12 +10,12 @@ PaddleX目前提供了FasterRCNN和YOLOv3两种检测结构,多种backbone模型
 
 | 模型(点击获取代码)               | Box MMAP | 模型大小 | GPU预测速度 | Arm预测速度 | 备注 |
 | :----------------  | :------- | :------- | :---------  | :---------  | :-----    |
-| [YOLOv3-MobileNetV1](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/object_detection/yolov3_mobilenetv1.py) |  29.3%  |  99.2MB  |  15.442ms   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
-| [YOLOv3-MobileNetV3](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/object_detection/yolov3_mobilenetv3.py)        | 31.6%  | 100.7MB   |  143.322ms  | -  |  模型小,移动端上预测速度有优势   |
-| [YOLOv3-DarkNet53](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/object_detection/yolov3_darknet53.py)     | 38.9  | 249.2MB   | 42.672ms   | -  |  模型较大,预测速度快,适用于服务端   |
-| [FasterRCNN-ResNet50-FPN](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/object_detection/faster_rcnn_r50_fpn.py)   |  37.2%   |   136.0MB    |  197.715ms       |   -    | 模型精度高,适用于服务端部署   |
-| [FasterRCNN-ResNet18-FPN](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/object_detection/faster_rcnn_r18_fpn.py)   |  -   |   -    |  -       |   -    | 模型精度高,适用于服务端部署   |
-| [FasterRCNN-HRNet-FPN](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py)   |  36.0%   |   115.MB    |  81.592ms       |   -    | 模型精度高,预测速度快,适用于服务端部署   |
+| [YOLOv3-MobileNetV1](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/yolov3_mobilenetv1.py) |  29.3%  |  99.2MB  |  15.442ms   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
+| [YOLOv3-MobileNetV3](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/yolov3_mobilenetv3.py)        | 31.6%  | 100.7MB   |  143.322ms  | -  |  模型小,移动端上预测速度有优势   |
+| [YOLOv3-DarkNet53](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/yolov3_darknet53.py)     | 38.9%  | 249.2MB   | 42.672ms   | -  |  模型较大,预测速度快,适用于服务端   |
+| [FasterRCNN-ResNet50-FPN](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/faster_rcnn_r50_fpn.py)   |  37.2%   |   136.0MB    |  197.715ms       |   -    | 模型精度高,适用于服务端部署   |
+| [FasterRCNN-ResNet18-FPN](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/faster_rcnn_r18_fpn.py)   |  -   |   -    |  -       |   -    | 模型精度高,适用于服务端部署   |
+| [FasterRCNN-HRNet-FPN](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py)   |  36.0%   |   115.0MB    |  81.592ms       |   -    | 模型精度高,预测速度快,适用于服务端部署   |
 
 
 ## 开始训练

+ 6 - 6
docs/train/semantic_segmentation.md

@@ -10,12 +10,12 @@ PaddleX目前提供了DeepLabv3p、UNet、HRNet和FastSCNN四种语义分割结
 
 | 模型(点击获取代码)               | mIOU | 模型大小 | GPU预测速度 | Arm预测速度 | 备注 |
 | :----------------  | :------- | :------- | :---------  | :---------  | :-----    |
-| [DeepLabv3p-MobileNetV2-x0.25](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2_x0.25.py) |  -  |  2.9MB  |  -   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
-| [DeepLabv3p-MobileNetV2-x1.0](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py) |  69.8%  |  11MB  |  -   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
-| [DeepLabv3p-Xception65](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/semantic_segmentation/deeplabv3p_xception65.pyy)        | 79.3%  | 158MB   |  -  | -  |  模型大,精度高,适用于服务端   |
-| [UNet](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/semantic_segmentation/unet.py)     | -  | 52MB   | -   | -  |  模型较大,精度高,适用于服务端   |
-| [HRNet](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/semantic_segmentation/hrnet.py)   |  79.4%   |   37MB    |  -       |   -    | 模型较小,模型精度高,适用于服务端部署   |
-| [FastSCNN](https://github.com/PaddlePaddle/PaddleX/blob/doc/tutorials/train/semantic_segmentation/fast_scnn.py)   |  -   |   4.5MB    |  -       |   -    | 模型小,预测速度快,适用于低性能或移动端设备   |
+| [DeepLabv3p-MobileNetV2-x0.25](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2_x0.25.py) |  -  |  2.9MB  |  -   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
+| [DeepLabv3p-MobileNetV2-x1.0](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py) |  69.8%  |  11MB  |  -   | -  |  模型小,预测速度快,适用于低性能或移动端设备   |
+| [DeepLabv3p-Xception65](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/deeplabv3p_xception65.py)        | 79.3%  | 158MB   |  -  | -  |  模型大,精度高,适用于服务端   |
+| [UNet](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/unet.py)     | -  | 52MB   | -   | -  |  模型较大,精度高,适用于服务端   |
+| [HRNet](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/hrnet.py)   |  79.4%   |   37MB    |  -       |   -    | 模型较小,模型精度高,适用于服务端部署   |
+| [FastSCNN](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/fast_scnn.py)   |  -   |   4.5MB    |  -       |   -    | 模型小,预测速度快,适用于低性能或移动端设备   |
 
 
 ## 开始训练

+ 4 - 2
paddlex/__init__.py

@@ -13,6 +13,7 @@
 # limitations under the License.
 
 from __future__ import absolute_import
+
 import os
 if 'FLAGS_eager_delete_tensor_gb' not in os.environ:
     os.environ['FLAGS_eager_delete_tensor_gb'] = '0.0'
@@ -21,6 +22,7 @@ if 'FLAGS_allocator_strategy' not in os.environ:
 if "CUDA_VISIBLE_DEVICES" in os.environ:
     if os.environ["CUDA_VISIBLE_DEVICES"].count("-1") > 0:
         os.environ["CUDA_VISIBLE_DEVICES"] = ""
+
 from .utils.utils import get_environ_info
 from . import cv
 from . import det
@@ -38,7 +40,7 @@ except:
         "[WARNING] pycocotools is not installed, detection model is not available now."
     )
     print(
-        "[WARNING] pycocotools install: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/install.md"
+        "[WARNING] pycocotools install: https://paddlex.readthedocs.io/zh_CN/develop/install.html#pycocotools"
     )
 
 import paddlehub as hub
@@ -54,4 +56,4 @@ log_level = 2
 
 from . import interpret
 
-__version__ = '1.0.7'
+__version__ = '1.0.8'

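The guards in paddlex/__init__.py above only set defaults when the user has not already configured them, and clear `CUDA_VISIBLE_DEVICES` when it contains "-1" (i.e. the user explicitly disabled GPUs). The same logic replicated over a plain dict so it can run without PaddleX (the `FLAGS_allocator_strategy` default is elided in the hunk, so it is omitted here):

```python
# Replica of the visible environment guards from paddlex/__init__.py,
# operating on a plain dict instead of os.environ so it is side-effect free.
def apply_paddlex_env_defaults(env):
    if 'FLAGS_eager_delete_tensor_gb' not in env:
        env['FLAGS_eager_delete_tensor_gb'] = '0.0'
    if 'CUDA_VISIBLE_DEVICES' in env:
        if env['CUDA_VISIBLE_DEVICES'].count('-1') > 0:
            # "-1" anywhere in the device list means: hide all GPUs, run on CPU
            env['CUDA_VISIBLE_DEVICES'] = ''
    return env

print(apply_paddlex_env_defaults({'CUDA_VISIBLE_DEVICES': '-1'}))
# {'CUDA_VISIBLE_DEVICES': '', 'FLAGS_eager_delete_tensor_gb': '0.0'}
```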
+ 11 - 1
tutorials/train/image_classification/alexnet.py

@@ -1,3 +1,8 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
+import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.cls import transforms
 import paddlex as pdx
 
@@ -6,6 +11,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomCrop(crop_size=224), 
     transforms.RandomHorizontalFlip(),
@@ -18,6 +24,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-imagenet
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -33,11 +40,14 @@ eval_dataset = pdx.datasets.ImageNet(
 # 初始化模型,并进行训练
 # 可使用VisualDL查看训练指标
 # VisualDL启动方式: visualdl --logdir output/mobilenetv2/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
+# 浏览器打开 https://0.0.0.0:8001或https://localhost:8001即可
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 model = pdx.cls.AlexNet(num_classes=len(train_dataset.labels))
 # AlexNet需要指定确定的input_shape
 model.fixed_input_shape = [224, 224]
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 9 - 0
tutorials/train/image_classification/mobilenetv2.py

@@ -1,4 +1,8 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.cls import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomCrop(crop_size=224), 
     transforms.RandomHorizontalFlip(),
@@ -19,6 +24,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-imagenet
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -37,6 +43,9 @@ eval_dataset = pdx.datasets.ImageNet(
 # 浏览器打开 https://0.0.0.0:8001即可
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 model = pdx.cls.MobileNetV2(num_classes=len(train_dataset.labels))
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 9 - 0
tutorials/train/image_classification/mobilenetv3_small_ssld.py

@@ -1,4 +1,8 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.cls import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomCrop(crop_size=224), 
     transforms.RandomHorizontalFlip(),
@@ -19,6 +24,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-imagenet
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -37,6 +43,9 @@ eval_dataset = pdx.datasets.ImageNet(
 # 浏览器打开 https://0.0.0.0:8001即可
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 model = pdx.cls.MobileNetV3_small_ssld(num_classes=len(train_dataset.labels))
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 9 - 0
tutorials/train/image_classification/resnet50_vd_ssld.py

@@ -1,4 +1,8 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.cls import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomCrop(crop_size=224), 
     transforms.RandomHorizontalFlip(),
@@ -19,6 +24,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-imagenet
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -37,6 +43,9 @@ eval_dataset = pdx.datasets.ImageNet(
 # 浏览器打开 https://0.0.0.0:8001即可
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 model = pdx.cls.ResNet50_vd_ssld(num_classes=len(train_dataset.labels))
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 9 - 0
tutorials/train/image_classification/shufflenetv2.py

@@ -1,4 +1,8 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.cls import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ veg_dataset = 'https://bj.bcebos.com/paddlex/datasets/vegetables_cls.tar.gz'
 pdx.utils.download_and_decompress(veg_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomCrop(crop_size=224), 
     transforms.RandomHorizontalFlip(),
@@ -19,6 +24,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-imagenet
 train_dataset = pdx.datasets.ImageNet(
     data_dir='vegetables_cls',
     file_list='vegetables_cls/train_list.txt',
@@ -37,6 +43,9 @@ eval_dataset = pdx.datasets.ImageNet(
 # 浏览器打开 https://0.0.0.0:8001即可
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 model = pdx.cls.ShuffleNetV2(num_classes=len(train_dataset.labels))
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=10,
     train_dataset=train_dataset,

+ 9 - 1
tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py

@@ -1,5 +1,6 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# 选择使用0号卡
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 from paddlex.det import transforms
@@ -10,6 +11,7 @@ xiaoduxiong_dataset = 'https://bj.bcebos.com/paddlex/datasets/xiaoduxiong_ins_de
 pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.Normalize(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-cocodetection
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='xiaoduxiong_ins_det/JPEGImages',
     ann_file='xiaoduxiong_ins_det/train.json',
@@ -41,7 +44,12 @@ eval_dataset = pdx.datasets.CocoDetection(
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 # num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
 num_classes = len(train_dataset.labels) + 1
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/instance_segmentation.html#maskrcnn
 model = pdx.det.MaskRCNN(num_classes=num_classes, backbone='HRNet_W18')
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/instance_segmentation.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

+ 9 - 1
tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py

@@ -1,5 +1,6 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# 选择使用0号卡
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 from paddlex.det import transforms
@@ -10,6 +11,7 @@ xiaoduxiong_dataset = 'https://bj.bcebos.com/paddlex/datasets/xiaoduxiong_ins_de
 pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.Normalize(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-cocodetection
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='xiaoduxiong_ins_det/JPEGImages',
     ann_file='xiaoduxiong_ins_det/train.json',
@@ -41,7 +44,12 @@ eval_dataset = pdx.datasets.CocoDetection(
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 # num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
 num_classes = len(train_dataset.labels) + 1
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/instance_segmentation.html#maskrcnn
 model = pdx.det.MaskRCNN(num_classes=num_classes, backbone='ResNet18')
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/instance_segmentation.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

+ 9 - 1
tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py

@@ -1,5 +1,6 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# 选择使用0号卡
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 from paddlex.det import transforms
@@ -10,6 +11,7 @@ xiaoduxiong_dataset = 'https://bj.bcebos.com/paddlex/datasets/xiaoduxiong_ins_de
 pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.Normalize(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-cocodetection
 train_dataset = pdx.datasets.CocoDetection(
     data_dir='xiaoduxiong_ins_det/JPEGImages',
     ann_file='xiaoduxiong_ins_det/train.json',
@@ -41,7 +44,12 @@ eval_dataset = pdx.datasets.CocoDetection(
 # 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
 # num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
 num_classes = len(train_dataset.labels) + 1
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/instance_segmentation.html#maskrcnn
 model = pdx.det.MaskRCNN(num_classes=num_classes, backbone='ResNet50')
+
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/instance_segmentation.html#train
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

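As the comment repeated in these MaskRCNN scripts notes, `num_classes` must include the background class, i.e. number of target categories plus one. A one-line illustration (the `labels` list here is made up for the example):

```python
# num_classes = target categories + 1 background class, as the RCNN heads in
# the scripts above require. 'labels' is illustrative, not read from a dataset.
labels = ['xiaoduxiong']  # a hypothetical single-category detection dataset
num_classes = len(labels) + 1  # +1 for the background class
print(num_classes)  # 2
```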
+ 9 - 1
tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py

@@ -1,5 +1,6 @@
+# 环境变量配置,用于控制是否使用GPU
+# 说明文档:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# 选择使用0号卡
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 from paddlex.det import transforms
@@ -10,6 +11,7 @@ insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(insect_dataset, path='./')
 
 # 定义训练和验证时的transforms
+# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.Normalize(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # 定义训练和验证所用的数据集
+# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-vocdetection
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -43,7 +46,12 @@ eval_dataset = pdx.datasets.VOCDetection(
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 # num_classes must include the background class, i.e. number of target categories + 1
 num_classes = len(train_dataset.labels) + 1
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-fasterrcnn
 model = pdx.det.FasterRCNN(num_classes=num_classes, backbone='HRNet_W18')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#id1
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

+ 12 - 0
tutorials/train/object_detection/faster_rcnn_r18_fpn.py

@@ -1,4 +1,8 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.det import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(insect_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.Normalize(),
@@ -19,7 +24,9 @@ eval_transforms = transforms.Compose([
     transforms.ResizeByShort(short_size=800, max_size=1333),
     transforms.Padding(coarsest_stride=32),
 ])
+
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-vocdetection
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -39,7 +46,12 @@ eval_dataset = pdx.datasets.VOCDetection(
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 # num_classes must include the background class, i.e. number of target categories + 1
 num_classes = len(train_dataset.labels) + 1
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-fasterrcnn
 model = pdx.det.FasterRCNN(num_classes=num_classes, backbone='ResNet18')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#id1
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

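The `CUDA_VISIBLE_DEVICES` lines added at the top of every tutorial control which GPUs the framework can see. A small standalone sketch of the accepted values (the variable must be set before paddle/paddlex is imported, otherwise it has no effect on the already-initialized framework):

```python
import os

# Set BEFORE importing paddle/paddlex.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'     # use GPU 0, as in the tutorials
# os.environ['CUDA_VISIBLE_DEVICES'] = ''     # hide all GPUs -> CPU training
# os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # expose GPUs 0 and 1

print(os.environ['CUDA_VISIBLE_DEVICES'])  # 0
```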
+ 12 - 0
tutorials/train/object_detection/faster_rcnn_r50_fpn.py

@@ -1,4 +1,8 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.det import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(insect_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.Normalize(),
@@ -19,7 +24,9 @@ eval_transforms = transforms.Compose([
     transforms.ResizeByShort(short_size=800, max_size=1333),
     transforms.Padding(coarsest_stride=32),
 ])
+
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-vocdetection
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -39,7 +46,12 @@ eval_dataset = pdx.datasets.VOCDetection(
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 # num_classes must include the background class, i.e. number of target categories + 1
 num_classes = len(train_dataset.labels) + 1
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-fasterrcnn
 model = pdx.det.FasterRCNN(num_classes=num_classes, backbone='ResNet50')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#id1
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=12,
     train_dataset=train_dataset,

+ 11 - 0
tutorials/train/object_detection/yolov3_darknet53.py

@@ -1,4 +1,8 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.det import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(insect_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.MixupImage(mixup_epoch=250), 
     transforms.RandomDistort(),
@@ -23,6 +28,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-vocdetection
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -41,7 +47,12 @@ eval_dataset = pdx.datasets.VOCDetection(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-yolov3
 model = pdx.det.YOLOv3(num_classes=num_classes, backbone='DarkNet53')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=270,
     train_dataset=train_dataset,

+ 11 - 0
tutorials/train/object_detection/yolov3_mobilenetv1.py

@@ -1,4 +1,8 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.det import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(insect_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.MixupImage(mixup_epoch=250),
     transforms.RandomDistort(),
@@ -23,6 +28,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-vocdetection
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -41,7 +47,12 @@ eval_dataset = pdx.datasets.VOCDetection(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-yolov3
 model = pdx.det.YOLOv3(num_classes=num_classes, backbone='MobileNetV1')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=270,
     train_dataset=train_dataset,

+ 11 - 0
tutorials/train/object_detection/yolov3_mobilenetv3.py

@@ -1,4 +1,8 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
 from paddlex.det import transforms
 import paddlex as pdx
 
@@ -7,6 +11,7 @@ insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
 pdx.utils.download_and_decompress(insect_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
 train_transforms = transforms.Compose([
     transforms.MixupImage(mixup_epoch=250), 
     transforms.RandomDistort(),
@@ -23,6 +28,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-vocdetection
 train_dataset = pdx.datasets.VOCDetection(
     data_dir='insect_det',
     file_list='insect_det/train_list.txt',
@@ -41,7 +47,12 @@ eval_dataset = pdx.datasets.VOCDetection(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-yolov3
 model = pdx.det.YOLOv3(num_classes=num_classes, backbone='MobileNetV3_large')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=270,
     train_dataset=train_dataset,

+ 10 - 1
tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py

@@ -1,5 +1,6 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# Use GPU card 0
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 import paddlex as pdx
@@ -10,6 +11,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.ResizeRangeScaling(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-segdataset
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -42,7 +45,13 @@ eval_dataset = pdx.datasets.SegDataset(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-deeplabv3p
 model = pdx.seg.DeepLabv3p(num_classes=num_classes, backbone='MobileNetV2_x1.0')
+
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=40,
     train_dataset=train_dataset,

+ 9 - 1
tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2_x0.25.py

@@ -1,5 +1,6 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# Use GPU card 0
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 import paddlex as pdx
@@ -10,6 +11,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.ResizeRangeScaling(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-segdataset
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -42,7 +45,12 @@ eval_dataset = pdx.datasets.SegDataset(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-deeplabv3p
 model = pdx.seg.DeepLabv3p(num_classes=num_classes, backbone='MobileNetV2_x0.25')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=40,
     train_dataset=train_dataset,

+ 9 - 1
tutorials/train/semantic_segmentation/deeplabv3p_xception65.py

@@ -1,5 +1,6 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# Use GPU card 0
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 import paddlex as pdx
@@ -10,6 +11,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.ResizeRangeScaling(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-segdataset
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -42,7 +45,12 @@ eval_dataset = pdx.datasets.SegDataset(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-deeplabv3p
 model = pdx.seg.DeepLabv3p(num_classes=num_classes, backbone='Xception65')
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=40,
     train_dataset=train_dataset,

+ 9 - 4
tutorials/train/semantic_segmentation/fast_scnn.py

@@ -1,5 +1,6 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# Use GPU card 0
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 import paddlex as pdx
@@ -10,7 +11,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
-# API reference: https://paddlex.readthedocs.io/zh_CN/latest/apis/transforms/seg_transforms.html#composedsegtransforms
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.ResizeRangeScaling(),
@@ -25,7 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
-# API reference: https://paddlex.readthedocs.io/zh_CN/latest/apis/datasets/semantic_segmentation.html#segdataset
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-segdataset
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -44,9 +45,13 @@ eval_dataset = pdx.datasets.SegDataset(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 
-# https://paddlex.readthedocs.io/zh_CN/latest/apis/models/semantic_segmentation.html#fastscnn
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-fastscnn
 model = pdx.seg.FastSCNN(num_classes=num_classes)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=20,
     train_dataset=train_dataset,

+ 9 - 1
tutorials/train/semantic_segmentation/hrnet.py

@@ -1,5 +1,6 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# Use GPU card 0
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 import paddlex as pdx
@@ -10,6 +11,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.ResizeRangeScaling(),
@@ -24,6 +26,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-segdataset
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -42,7 +45,12 @@ eval_dataset = pdx.datasets.SegDataset(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-hrnet
 model = pdx.seg.HRNet(num_classes=num_classes)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=20,
     train_dataset=train_dataset,

+ 9 - 1
tutorials/train/semantic_segmentation/unet.py

@@ -1,5 +1,6 @@
+# Environment variable configuration that controls whether the GPU is used
+# Documentation: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html#gpu
 import os
-# Use GPU card 0
 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
 
 import paddlex as pdx
@@ -10,6 +11,7 @@ optic_dataset = 'https://bj.bcebos.com/paddlex/datasets/optic_disc_seg.tar.gz'
 pdx.utils.download_and_decompress(optic_dataset, path='./')
 
 # Define the transforms for training and validation
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
 train_transforms = transforms.Compose([
     transforms.RandomHorizontalFlip(), 
     transforms.ResizeRangeScaling(),
@@ -23,6 +25,7 @@ eval_transforms = transforms.Compose([
 ])
 
 # Define the training and validation datasets
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-segdataset
 train_dataset = pdx.datasets.SegDataset(
     data_dir='optic_disc_seg',
     file_list='optic_disc_seg/train_list.txt',
@@ -41,7 +44,12 @@ eval_dataset = pdx.datasets.SegDataset(
 # Open https://0.0.0.0:8001 in a browser
 # 0.0.0.0 means local access; for a remote server, replace it with that machine's IP
 num_classes = len(train_dataset.labels)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-unet
 model = pdx.seg.UNet(num_classes=num_classes)
+
+# API reference: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#train
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
 model.train(
     num_epochs=20,
     train_dataset=train_dataset,
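All of these tutorials build their preprocessing pipelines with `transforms.Compose`, which simply chains the listed transforms in order. A toy sketch of that pattern (a stand-in illustration, not the actual PaddleX class; the lambda transforms are hypothetical):

```python
class Compose:
    """Minimal sketch of the transforms.Compose pattern used above:
    apply each transform to the sample, in the order given."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, sample):
        for transform in self.transforms:
            sample = transform(sample)
        return sample

# Toy transforms standing in for RandomHorizontalFlip(), Normalize(), etc.
pipeline = Compose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # (3 + 1) * 2 = 8
```

Order matters: in the real scripts, geometric transforms run before `Normalize()`/`Padding()` for the same reason the second lambda here sees the first one's output.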