yzl19940819 4 years ago
parent commit 80d7ae8357
41 changed files with 1033 additions and 0 deletions
  1. 8 0
      dygraph/examples/README.md
  2. 184 0
      dygraph/examples/defect_detection/README.md
  3. 41 0
      dygraph/examples/defect_detection/code/infer.py
  4. 52 0
      dygraph/examples/defect_detection/code/train.py
  5. BIN
      dygraph/examples/defect_detection/images/labelme.png
  6. BIN
      dygraph/examples/defect_detection/images/lens.png
  7. BIN
      dygraph/examples/defect_detection/images/predict.jpg
  8. BIN
      dygraph/examples/defect_detection/images/process.png
  9. BIN
      dygraph/examples/defect_detection/images/robot.png
  10. BIN
      dygraph/examples/defect_detection/images/split_dataset.png
  11. BIN
      dygraph/examples/defect_detection/images/vdl.png
  12. BIN
      dygraph/examples/defect_detection/images/vdl2.png
  13. 288 0
      dygraph/examples/rebar_count/README.md
  14. 42 0
      dygraph/examples/rebar_count/code/infer.py
  15. 65 0
      dygraph/examples/rebar_count/code/prune.py
  16. 56 0
      dygraph/examples/rebar_count/code/train.py
  17. BIN
      dygraph/examples/rebar_count/images/0.png
  18. BIN
      dygraph/examples/rebar_count/images/2.png
  19. BIN
      dygraph/examples/rebar_count/images/3.png
  20. BIN
      dygraph/examples/rebar_count/images/5.png
  21. BIN
      dygraph/examples/rebar_count/images/7.png
  22. BIN
      dygraph/examples/rebar_count/images/8.png
  23. BIN
      dygraph/examples/rebar_count/images/phone_pic.jpg
  24. BIN
      dygraph/examples/rebar_count/images/predict.jpg
  25. BIN
      dygraph/examples/rebar_count/images/process.png
  26. BIN
      dygraph/examples/rebar_count/images/split_dataset.png
  27. BIN
      dygraph/examples/rebar_count/images/vdl.png
  28. BIN
      dygraph/examples/rebar_count/images/vdl2.png
  29. BIN
      dygraph/examples/rebar_count/images/worker.png
  30. 207 0
      dygraph/examples/robot_grab/README.md
  31. 40 0
      dygraph/examples/robot_grab/code/infer.py
  32. 50 0
      dygraph/examples/robot_grab/code/train.py
  33. BIN
      dygraph/examples/robot_grab/images/labelme.png
  34. BIN
      dygraph/examples/robot_grab/images/lens.png
  35. BIN
      dygraph/examples/robot_grab/images/predict.bmp
  36. BIN
      dygraph/examples/robot_grab/images/predict.jpg
  37. BIN
      dygraph/examples/robot_grab/images/process.png
  38. BIN
      dygraph/examples/robot_grab/images/robot.png
  39. BIN
      dygraph/examples/robot_grab/images/split_dataset.png
  40. BIN
      dygraph/examples/robot_grab/images/vdl.png
  41. BIN
      dygraph/examples/robot_grab/images/vdl2.png

+ 8 - 0
dygraph/examples/README.md

@@ -0,0 +1,8 @@
+# Overview
+This directory provides several real-world industrial application cases. Following each case's documentation, users can quickly learn how to apply PaddleX to actual project development.
+
+* [Rebar Counting](./rebar_count)
+
+* [Robotic Arm Grasping](./robot_grab)
+
+* [Defect Detection](./defect_detection)

+ 184 - 0
dygraph/examples/defect_detection/README.md

@@ -0,0 +1,184 @@
+# Lens Defect Detection
+### 1 Project Overview
+Camera modules are among the most important components of a smartphone. With the rapid growth of the smartphone industry, demand for camera modules has risen, and the arrival of high-resolution cameras raises new requirements for module inspection accuracy.
+
+Using phone camera lenses as an example, this project shows how to quickly perform defect detection with instance segmentation.
+
+
+
+### 2 Data Preparation
+The dataset contains 992 annotated images in MSCOCO instance-segmentation format. [Click here to download the dataset](https://bj.bcebos.com/paddlex/examples2/defect_detection/dataset_lens_defect_detection.zip)
+
+<div align="center">
+<img src="./images/lens.png" width="1000" /></div>
+
+For more information on data formats, see the [data annotation documentation](https://paddlex.readthedocs.io/zh_CN/develop/data/annotation/index.html).
+* **Dataset Splitting**
+Split the data into training, validation, and test sets at a 7:2:1 ratio.
+``` shell
+paddlex --split_dataset --format COCO --dataset_dir dataset --val_value 0.2 --test_value 0.1
+```
+<div align="center">
+<img src="./images/split_dataset.png" width="1500" /></div>
+The dataset folder before and after splitting:
+
+```bash
+  dataset/                      dataset/
+  ├── JPEGImages/       -->     ├── JPEGImages/
+  ├── annotations.json          ├── annotations.json
+                                ├── test.json
+                                ├── train.json
+                                ├── val.json
+```
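As a rough illustration of what the `--split_dataset` command above does (this is not PaddleX's actual implementation), a 7:2:1 split amounts to shuffling the image list and slicing it by ratio:

```python
import random

def split_dataset(items, val_ratio=0.2, test_ratio=0.1, seed=0):
    """Shuffle items and split into train/val/test; train gets the remainder."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_val = int(n * val_ratio)
    n_test = int(n * test_ratio)
    val = items[:n_val]
    test = items[n_val:n_val + n_test]
    train = items[n_val + n_test:]
    return train, val, test

# With the 992 images of this dataset: 695 train, 198 val, 99 test
train, val, test = split_dataset(range(992), 0.2, 0.1)
print(len(train), len(val), len(test))  # 695 198 99
```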
+
+
+### 3 Model Selection
+PaddleX provides a rich set of vision models, including the Mask R-CNN series for instance segmentation. This project uses the Mask R-CNN algorithm.
+
+### 4 Model Training
+This project uses Mask R-CNN as the lens defect detection model. See [train.py](./code/train.py) for the full code.
+Run the following command to start training:
+``` shell
+python code/train.py
+```
+To write the training log to a `log` file instead, run:
+``` shell
+python code/train.py > log
+```
+* Training pipeline
+<div align="center">
+<img src="./images/process.png" width="1000" /></div>
+
+``` python
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.RandomResizeByShort(
+        short_sizes=[640, 672, 704, 736, 768, 800],
+        max_size=1333,
+        interp='CUBIC'), T.RandomHorizontalFlip(), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.ResizeByShort(
+        short_size=800, max_size=1333, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+```
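For reference, the mean/std values above follow the common ImageNet convention: `T.Normalize` scales pixels to [0, 1], subtracts the per-channel mean, and divides by the per-channel std. A minimal sketch of the per-pixel arithmetic (illustrative only, not PaddleX code):

```python
MEAN = [0.485, 0.456, 0.406]  # ImageNet channel means
STD = [0.229, 0.224, 0.225]   # ImageNet channel stds

def normalize_pixel(rgb):
    """Normalize one 8-bit RGB pixel channel-wise, as T.Normalize does."""
    return [((v / 255.0) - m) / s for v, m, s in zip(rgb, MEAN, STD)]

# A pixel close to the ImageNet mean maps to values near zero
print(normalize_pixel([124, 116, 104]))
```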
+
+```python
+# Define the training and validation datasets
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+train_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/train.json',
+    # num_workers=0,  # note: uncomment this line if you hit a runtime error
+    transforms=train_transforms,
+    shuffle=True)
+eval_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/val.json',
+    # num_workers=0,  # note: uncomment this line if you hit a runtime error
+    transforms=eval_transforms)
+```
+``` python
+# Initialize the model and start training
+# Training metrics can be viewed with VisualDL; see https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train#visualdl可视化训练指标
+num_classes = len(train_dataset.labels)
+model = pdx.models.MaskRCNN(
+    num_classes=num_classes, backbone='ResNet50', with_fpn=True)
+```
+``` python
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L155
+# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=12,
+    train_dataset=train_dataset,
+    train_batch_size=1,
+    eval_dataset=eval_dataset,
+    learning_rate=0.00125,
+    lr_decay_epochs=[8, 11],
+    warmup_steps=10,
+    warmup_start_lr=0.0,
+    save_dir='output/mask_rcnn_r50_fpn',
+    use_vdl=True)
+```
+
+### 5 Training Visualization
+
+When `use_vdl` is set to True in the `train` function, the training process automatically writes the training log in VisualDL format to the `vdl_log` directory under `save_dir` (the user-specified path).
+
+Start the VisualDL service with the following command to view the visualized metrics:
+
+```
+visualdl --logdir output/mask_rcnn_r50_fpn/vdl_log --port 8001
+```
+
+<div align="center">
+<img src="./images/vdl.png" width="1000" /></div>
+
+Once the service is running, open http://localhost:8001/ in a browser as prompted on the command line.
+
+### 6 Model Export
+Training saves the model in the output folder as a dynamic-graph model; it must be exported to a static-graph model before it can be deployed for inference. Running the following command automatically creates an `inference_model` folder under output to store the exported model.
+
+``` bash
+paddlex --export_inference --model_dir=output/mask_rcnn_r50_fpn/best_model --save_dir=output/inference_model
+```
+### 7 Model Prediction
+
+Run:
+``` bash
+python code/infer.py
+```
+The file contents are as follows:
+``` python
+import glob
+import numpy as np
+import threading
+import time
+import random
+import os
+import base64
+import cv2
+import json
+import paddlex as pdx
+
+image_name = 'dataset/JPEGImages/Image_370.jpg'
+model = pdx.load_model('output/mask_rcnn_r50_fpn/best_model')
+
+
+img = cv2.imread(image_name)
+result = model.predict(img)
+
+keep_results = []
+areas = []
+f = open('result.txt','a')
+count = 0
+for dt in np.array(result):
+    cname, bbox, score = dt['category'], dt['bbox'], dt['score']
+    if score < 0.5:
+        continue
+    keep_results.append(dt)
+    count+=1
+    f.write(str(dt)+'\n')
+    f.write('\n')
+    areas.append(bbox[2] * bbox[3])
+areas = np.asarray(areas)
+sorted_idxs = np.argsort(-areas).tolist()
+keep_results = [keep_results[k]
+                for k in sorted_idxs] if len(keep_results) > 0 else []
+print(keep_results)
+print(count)
+f.write("the total number is :"+str(int(count)))
+f.close()
+
+pdx.det.visualize(image_name, result, threshold=0.5, save_dir='./output/mask_rcnn_r50_fpn')
+```
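The post-processing in the script above (drop low-score boxes, then sort by box area, largest first) can be isolated into a small helper. A sketch with dummy detections — the dict keys mirror the `predict` output used above, but the data here is made up:

```python
def filter_and_sort(results, threshold=0.5):
    """Keep detections scoring at least `threshold`, sorted by bbox area,
    largest first. bbox is [x, y, w, h], as in the loop above."""
    kept = [dt for dt in results if dt['score'] >= threshold]
    return sorted(kept, key=lambda dt: dt['bbox'][2] * dt['bbox'][3], reverse=True)

dets = [
    {'category': 'defect', 'bbox': [0, 0, 10, 10], 'score': 0.9},
    {'category': 'defect', 'bbox': [5, 5, 30, 20], 'score': 0.7},
    {'category': 'defect', 'bbox': [2, 2, 50, 40], 'score': 0.3},  # filtered out
]
kept = filter_and_sort(dets)
print(len(kept), kept[0]['bbox'])  # 2 [5, 5, 30, 20]
```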
+This generates a result.txt file and displays the prediction image; result.txt records the position, category, and confidence of every detection box in the image, along with the total number of boxes.
+
+The prediction result:
+<div align="center">
+<img src="./images/predict.jpg" width="1000" /></div>

+ 41 - 0
dygraph/examples/defect_detection/code/infer.py

@@ -0,0 +1,41 @@
+import glob
+import numpy as np
+import threading
+import time
+import random
+import os
+import base64
+import cv2
+import json
+import paddlex as pdx
+
+image_name = 'dataset/JPEGImages/Image_370.jpg'
+model = pdx.load_model('output/mask_rcnn_r50_fpn/best_model')
+
+img = cv2.imread(image_name)
+result = model.predict(img)
+
+keep_results = []
+areas = []
+f = open('result.txt', 'a')
+count = 0
+for dt in np.array(result):
+    cname, bbox, score = dt['category'], dt['bbox'], dt['score']
+    if score < 0.5:
+        continue
+    keep_results.append(dt)
+    count += 1
+    f.write(str(dt) + '\n')
+    f.write('\n')
+    areas.append(bbox[2] * bbox[3])
+areas = np.asarray(areas)
+sorted_idxs = np.argsort(-areas).tolist()
+keep_results = [keep_results[k]
+                for k in sorted_idxs] if len(keep_results) > 0 else []
+print(keep_results)
+print(count)
+f.write("the total number is :" + str(int(count)))
+f.close()
+
+pdx.det.visualize(
+    image_name, result, threshold=0.5, save_dir='./output/mask_rcnn_r50_fpn')

+ 52 - 0
dygraph/examples/defect_detection/code/train.py

@@ -0,0 +1,52 @@
+import paddlex as pdx
+from paddlex import transforms as T
+
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.RandomResizeByShort(
+        short_sizes=[640, 672, 704, 736, 768, 800],
+        max_size=1333,
+        interp='CUBIC'), T.RandomHorizontalFlip(), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.ResizeByShort(
+        short_size=800, max_size=1333, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+# Define the training and validation datasets
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+train_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/train.json',
+    transforms=train_transforms,
+    shuffle=True,
+    num_workers=0)
+eval_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/val.json',
+    transforms=eval_transforms,
+    num_workers=0)
+
+# Initialize the model and start training
+# Training metrics can be viewed with VisualDL; see https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train#visualdl可视化训练指标
+num_classes = len(train_dataset.labels)
+model = pdx.models.MaskRCNN(
+    num_classes=num_classes, backbone='ResNet50', with_fpn=True)
+
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L155
+# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=12,
+    train_dataset=train_dataset,
+    train_batch_size=1,
+    eval_dataset=eval_dataset,
+    learning_rate=0.00125,
+    lr_decay_epochs=[8, 11],
+    warmup_steps=10,
+    warmup_start_lr=0.0,
+    save_dir='output/mask_rcnn_r50_fpn',
+    use_vdl=True)

BIN
dygraph/examples/defect_detection/images/labelme.png


BIN
dygraph/examples/defect_detection/images/lens.png


BIN
dygraph/examples/defect_detection/images/predict.jpg


BIN
dygraph/examples/defect_detection/images/process.png


BIN
dygraph/examples/defect_detection/images/robot.png


BIN
dygraph/examples/defect_detection/images/split_dataset.png


BIN
dygraph/examples/defect_detection/images/vdl.png


BIN
dygraph/examples/defect_detection/images/vdl2.png


+ 288 - 0
dygraph/examples/rebar_count/README.md

@@ -0,0 +1,288 @@
+# Rebar Counting
+
+
+### 1 Project Overview
+This project shows how to use object detection to count rebars. The same code can be adapted to count vehicles, nuts, logs, and similar objects.
+
+At construction sites, inspectors must manually count the rebars on each incoming truck and confirm the quantity before the truck can unload. This process is tedious, labor-intensive, and slow. To address it, we aim to complete the task intelligently and efficiently via: phone photo -> object-detection counting -> manual correction of a few false detections:
+<div align="center">
+<img src="./images/worker.png" width="500" /></div>
+
+**Business challenges:**
+* **High accuracy requirements** Rebar is expensive and used in large quantities, and every false or missed detection must be located manually among many marked points, so very high accuracy is needed to keep the inspectors' workflow usable. The detection algorithm must be optimized specifically for dense targets, and must also cope with uncontrolled shooting angles and lighting, uneven rebar lengths, and possible occlusion.
+* **Varying rebar sizes** Rebar diameters vary widely, cross-sections are irregular, and colors differ; shooting angle and distance are not fully controlled either, which makes it hard for traditional algorithms to stay stable in practice.
+* **Hard-to-separate boundaries** A truck carries many bundles of rebar at once, and processing them all together suffers from poor edge angles and occlusion. The current workflow therefore processes one bundle at a time and sums the results, which requires either separating the bundles or de-duplicating the final result, both of which are difficult.
+
+<div align="center">
+<img src="./images/phone_pic.jpg" width="1000" /></div>
+
+### 2 Data Preparation
+
+The dataset contains 250 annotated images; the original annotations were provided in CSV format. This project uses object-detection annotations, supplied here in VOC format. [Click here to download the dataset](https://bj.bcebos.com/paddlex/examples2/rebar_count/dataset_reinforcing_steel_bar_counting.zip)
+
+For more information on data formats, see the [data annotation documentation](https://paddlex.readthedocs.io/zh_CN/develop/data/annotation/index.html).
+
+* **Dataset Splitting**
+Split the data into training, validation, and test sets at a 7:2:1 ratio. PaddleX provides a simple, easy-to-use API to do this directly.
+``` shell
+paddlex --split_dataset --format VOC --dataset_dir dataset --val_value 0.2 --test_value 0.1
+```
+<div align="center">
+<img src="./images/split_dataset.png" width="1500" /></div>
+The dataset folder before and after splitting:
+
+```bash
+  dataset/                          dataset/
+  ├── Annotations/      -->         ├── Annotations/
+  ├── JPEGImages/                   ├── JPEGImages/
+                                    ├── labels.txt
+                                    ├── test_list.txt
+                                    ├── train_list.txt
+                                    ├── val_list.txt
+```
+
+### 3 Model Selection
+PaddleX provides a rich set of vision models, including the RCNN and YOLO series for object detection. This project uses YOLOv3 as the detection model for rebar counting.
+
+### 4 Model Training
+In this project we use YOLOv3 as the rebar detection model. See [train.py](./code/train.py) for the full code.
+
+Run the following command to start training:
+
+
+``` shell
+python code/train.py
+```
+
+To write the training log to a `log` file in the current working directory instead, run:
+``` shell
+python code/train.py > log
+```
+
+* Training pipeline
+<div align="center">
+<img src="./images/process.png" width="1000" /></div>
+
+``` python
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.MixupImage(mixup_epoch=250), T.RandomDistort(),
+    T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
+    T.RandomHorizontalFlip(), T.BatchRandomResize(
+        target_sizes=[320, 352, 384, 416, 448, 480, 512, 544, 576, 608],
+        interp='RANDOM'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.Resize(
+        608, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+```
+
+```python
+# Define the training and validation datasets
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/datasets/voc.py#L29
+train_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/train_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=train_transforms,
+    shuffle=True,
+    num_workers=0)
+
+eval_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/val_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=eval_transforms,
+    shuffle=False,
+    num_workers=0)
+```
+``` python
+# Initialize the model and start training
+# Training metrics can be viewed with VisualDL; see https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train#visualdl可视化训练指标
+num_classes = len(train_dataset.labels)
+model = pdx.models.YOLOv3(num_classes=num_classes, backbone='DarkNet53')
+```
+``` python
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L155
+# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=270,
+    train_dataset=train_dataset,
+    train_batch_size=2,
+    eval_dataset=eval_dataset,
+    learning_rate=0.001 / 8,
+    warmup_steps=1000,
+    warmup_start_lr=0.0,
+    save_interval_epochs=5,
+    lr_decay_epochs=[216, 243],
+    save_dir='output/yolov3_darknet53')
+```
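The settings above combine linear warmup over the first 1000 steps with step decay at epochs 216 and 243. Assuming linear warmup and a 0.1 decay factor (both assumptions; check the parameter docs linked above), the resulting schedule can be sketched as:

```python
def lr_at(step, epoch, base_lr=0.001 / 8, warmup_steps=1000,
          warmup_start_lr=0.0, decay_epochs=(216, 243), gamma=0.1):
    """Learning rate at a given global step and epoch under
    linear warmup followed by step decay (assumed behavior)."""
    if step < warmup_steps:
        # Linear ramp from warmup_start_lr up to base_lr
        return warmup_start_lr + (base_lr - warmup_start_lr) * step / warmup_steps
    # Multiply by gamma once for each decay epoch already passed
    factor = gamma ** sum(epoch >= e for e in decay_epochs)
    return base_lr * factor

print(lr_at(step=500, epoch=0))     # halfway through warmup
print(lr_at(step=5000, epoch=220))  # after the first decay
```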
+
+
+
+### 5 Training Visualization
+
+When `use_vdl` is set to True in the `train` function, the training process automatically writes the training log in VisualDL format to the `vdl_log` directory under `save_dir` (the user-specified path).
+
+Start the VisualDL service with the following command to view the visualized metrics:
+
+```
+visualdl --logdir output/yolov3_darknet53/vdl_log --port 8001
+```
+
+<div align="center">
+<img src="./images/vdl.png" width="1000" /></div>
+
+Once the service is running, open http://localhost:8001/ in a browser as prompted on the command line.
+<div align="center">
+<img src="./images/vdl2.png"  width = "1000" />              </div>
+
+### 6 Model Export
+After training, the model is saved in the output folder. To deploy it with Paddle Inference it must first be exported as a static-graph model. Running the following command automatically creates an `inference_model` folder under output to store the exported model.
+
+``` bash
+paddlex --export_inference --model_dir=output/yolov3_darknet53/best_model --save_dir=output/inference_model --fixed_input_shape=608,608
+```
+**Note**: the value of fixed_input_shape must match the target_size set in eval_transforms.
+### 7 Model Prediction
+
+Run:
+``` bash
+python code/infer.py
+```
+The file contents are as follows:
+```python
+import glob
+import numpy as np
+import threading
+import time
+import random
+import os
+import base64
+import cv2
+import json
+import paddlex as pdx
+
+image_name = 'dataset/JPEGImages/6B898244.jpg'
+
+model = pdx.load_model('output/yolov3_darknet53/best_model')
+
+img = cv2.imread(image_name)
+result = model.predict(img)
+
+keep_results = []
+areas = []
+f = open('result.txt','a')
+count = 0
+for dt in np.array(result):
+    cname, bbox, score = dt['category'], dt['bbox'], dt['score']
+    if score < 0.5:
+        continue
+    keep_results.append(dt)
+    count+=1
+    f.write(str(dt)+'\n')
+    f.write('\n')
+    areas.append(bbox[2] * bbox[3])
+areas = np.asarray(areas)
+sorted_idxs = np.argsort(-areas).tolist()
+keep_results = [keep_results[k]
+                for k in sorted_idxs] if len(keep_results) > 0 else []
+print(keep_results)
+print(count)
+f.write("the total number is :"+str(int(count)))
+f.close()
+pdx.det.visualize(image_name, result, threshold=0.5, save_dir='./output/yolov3_darknet53')
+```
+
+This generates a result.txt file and displays the prediction image; result.txt records the position, category, and confidence of every detection box, along with the total count, thus achieving automatic rebar counting.
+
+The prediction result:
+<div align="center">
+<img src="./images/predict.jpg" width="1000" /></div>
+
+### 8 Model Pruning
+
+Model pruning helps meet the performance requirements of edge and mobile deployment: it effectively reduces model size and computation and speeds up inference. PaddleX integrates PaddleSlim's sensitivity-based channel pruning algorithm, which can be used directly from PaddleX training code.
+
+Run:
+``` bash
+python code/prune.py
+```
+The pruning process:
+``` python
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.MixupImage(mixup_epoch=250), T.RandomDistort(),
+    T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
+    T.RandomHorizontalFlip(), T.BatchRandomResize(
+        target_sizes=[320, 352, 384, 416, 448, 480, 512, 544, 576, 608],
+        interp='RANDOM'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.Resize(
+        608, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+```
+``` python
+# Define the training and validation datasets
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/datasets/voc.py#L29
+train_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/train_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=train_transforms,
+    shuffle=True)
+
+eval_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/val_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=eval_transforms,
+    shuffle=False)
+```
+``` python
+# Load the trained model
+model = pdx.load_model('output/yolov3_darknet53/best_model')
+```
+``` python
+# Step 1/3: analyze the sensitivity of each layer's parameters under different pruning ratios
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/95c53dec89ab0f3769330fa445c6d9213986ca5f/paddlex/cv/models/base.py#L352
+model.analyze_sensitivity(
+    dataset=eval_dataset,
+    batch_size=1,
+    save_dir='output/yolov3_darknet53/prune')
+```
+**Note**: if this step has been run before, the existing output/yolov3_darknet53/prune/model.sensi.data is loaded automatically on subsequent runs and the sensitivity analysis is skipped.
+``` python
+# Step 2/3: prune the model according to the chosen FLOPs reduction ratio
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/95c53dec89ab0f3769330fa445c6d9213986ca5f/paddlex/cv/models/base.py#L394
+model.prune(pruned_flops=.2)
+```
+**Note**: to save the pruned model parameters directly, just set save_dir. However, we strongly recommend retraining the pruned model so that the accuracy loss is as small as possible.
+``` python
+# Step 3/3: retrain the pruned model
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L154
+# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=270,
+    train_dataset=train_dataset,
+    train_batch_size=8,
+    eval_dataset=eval_dataset,
+    learning_rate=0.001 / 8,
+    warmup_steps=1000,
+    warmup_start_lr=0.0,
+    save_interval_epochs=5,
+    lr_decay_epochs=[216, 243],
+    pretrain_weights=None,
+    save_dir='output/yolov3_darknet53/prune')
+```
+**Note**: set pretrain_weights to None when retraining; otherwise the model will load the pretrained weights specified by pretrain_weights.
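Conceptually, the sensitivity data from Step 1 maps each layer and pruning ratio to the measured drop in the eval metric, and Step 2 uses it to avoid pruning sensitive layers. A toy sketch of the ratio-selection idea (not PaddleSlim's actual algorithm; layer names and numbers are made up):

```python
def pick_ratios(sensitivity, max_loss=0.05):
    """For each layer, pick the largest pruning ratio whose measured
    metric drop stays within max_loss (0.0 if none qualifies)."""
    ratios = {}
    for layer, curve in sensitivity.items():
        ok = [r for r, drop in curve.items() if drop <= max_loss]
        ratios[layer] = max(ok) if ok else 0.0
    return ratios

toy_sensitivity = {
    'conv1': {0.1: 0.001, 0.3: 0.02, 0.5: 0.15},    # sensitive beyond 0.3
    'conv2': {0.1: 0.0005, 0.3: 0.004, 0.5: 0.03},  # tolerates heavy pruning
}
print(pick_ratios(toy_sensitivity))  # {'conv1': 0.3, 'conv2': 0.5}
```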

+ 42 - 0
dygraph/examples/rebar_count/code/infer.py

@@ -0,0 +1,42 @@
+import glob
+import numpy as np
+import threading
+import time
+import random
+import os
+import base64
+import cv2
+import json
+import paddlex as pdx
+
+image_name = 'dataset/JPEGImages/6B898244.jpg'
+
+model = pdx.load_model('output/yolov3_darknet53/best_model')
+
+img = cv2.imread(image_name)
+result = model.predict(img)
+
+keep_results = []
+areas = []
+f = open('result.txt', 'a')
+count = 0
+for dt in np.array(result):
+    cname, bbox, score = dt['category'], dt['bbox'], dt['score']
+    if score < 0.5:
+        continue
+    keep_results.append(dt)
+    count += 1
+    f.write(str(dt) + '\n')
+    f.write('\n')
+    areas.append(bbox[2] * bbox[3])
+areas = np.asarray(areas)
+sorted_idxs = np.argsort(-areas).tolist()
+keep_results = [keep_results[k]
+                for k in sorted_idxs] if len(keep_results) > 0 else []
+print(keep_results)
+print(count)
+f.write("the total number is :" + str(int(count)))
+f.close()
+
+pdx.det.visualize(
+    image_name, result, threshold=0.5, save_dir='./output/yolov3_darknet53')

+ 65 - 0
dygraph/examples/rebar_count/code/prune.py

@@ -0,0 +1,65 @@
+import paddlex as pdx
+from paddlex import transforms as T
+
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.MixupImage(mixup_epoch=250), T.RandomDistort(),
+    T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
+    T.RandomHorizontalFlip(), T.BatchRandomResize(
+        target_sizes=[320, 352, 384, 416, 448, 480, 512, 544, 576, 608],
+        interp='RANDOM'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.Resize(
+        608, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+# Define the training and validation datasets
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/datasets/voc.py#L29
+train_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/train_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=train_transforms,
+    shuffle=True)
+
+eval_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/val_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=eval_transforms,
+    shuffle=False)
+
+# Load the trained model
+model = pdx.load_model('output/yolov3_darknet53/best_model')
+
+# Step 1/3: analyze the sensitivity of each layer's parameters under different pruning ratios
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/95c53dec89ab0f3769330fa445c6d9213986ca5f/paddlex/cv/models/base.py#L352
+model.analyze_sensitivity(
+    dataset=eval_dataset,
+    batch_size=1,
+    save_dir='output/yolov3_darknet53/prune')
+
+# Step 2/3: prune the model according to the chosen FLOPs reduction ratio
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/95c53dec89ab0f3769330fa445c6d9213986ca5f/paddlex/cv/models/base.py#L394
+model.prune(pruned_flops=.2)
+
+# Step 3/3: retrain the pruned model
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L154
+# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=270,
+    train_dataset=train_dataset,
+    train_batch_size=8,
+    eval_dataset=eval_dataset,
+    learning_rate=0.001 / 8,
+    warmup_steps=1000,
+    warmup_start_lr=0.0,
+    save_interval_epochs=5,
+    lr_decay_epochs=[216, 243],
+    pretrain_weights=None,
+    save_dir='output/yolov3_darknet53/prune')

+ 56 - 0
dygraph/examples/rebar_count/code/train.py

@@ -0,0 +1,56 @@
+import paddlex as pdx
+from paddlex import transforms as T
+
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.MixupImage(mixup_epoch=250), T.RandomDistort(),
+    T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
+    T.RandomHorizontalFlip(), T.BatchRandomResize(
+        target_sizes=[320, 352, 384, 416, 448, 480, 512, 544, 576, 608],
+        interp='RANDOM'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.Resize(
+        608, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+# Define the training and validation datasets
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/datasets/voc.py#L29
+train_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/train_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=train_transforms,
+    shuffle=True,
+    num_workers=0)
+
+eval_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/val_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=eval_transforms,
+    shuffle=False,
+    num_workers=0)
+
+# Initialize the model and start training
+# Training metrics can be viewed with VisualDL; see https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train#visualdl可视化训练指标
+num_classes = len(train_dataset.labels)
+model = pdx.models.YOLOv3(num_classes=num_classes, backbone='DarkNet53')
+
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L155
+# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=270,
+    train_dataset=train_dataset,
+    train_batch_size=2,
+    eval_dataset=eval_dataset,
+    learning_rate=0.001 / 8,
+    warmup_steps=1000,
+    warmup_start_lr=0.0,
+    save_interval_epochs=5,
+    lr_decay_epochs=[216, 243],
+    save_dir='output/yolov3_darknet53')

BIN
dygraph/examples/rebar_count/images/0.png


BIN
dygraph/examples/rebar_count/images/2.png


BIN
dygraph/examples/rebar_count/images/3.png


BIN
dygraph/examples/rebar_count/images/5.png


BIN
dygraph/examples/rebar_count/images/7.png


BIN
dygraph/examples/rebar_count/images/8.png


BIN
dygraph/examples/rebar_count/images/phone_pic.jpg


BIN
dygraph/examples/rebar_count/images/predict.jpg


BIN
dygraph/examples/rebar_count/images/process.png


BIN
dygraph/examples/rebar_count/images/split_dataset.png


BIN
dygraph/examples/rebar_count/images/vdl.png


BIN
dygraph/examples/rebar_count/images/vdl2.png


BIN
dygraph/examples/rebar_count/images/worker.png


+ 207 - 0
dygraph/examples/robot_grab/README.md

@@ -0,0 +1,207 @@
+# Robotic Arm Grasping
+### 1 Project Overview
+In production, automated grasping devices or robotic arms are often used in place of manual operation to save labor and improve processing efficiency. Grasping accuracy depends largely on the recognition accuracy of the vision system.
+
+In 2D visual grasping, obtaining a precise edge contour of the target object directly determines whether it can be grasped accurately. In this project, instance segmentation is used to segment the edge contours of targets in a bin, guiding the robotic arm to grasp them accurately.
+
+<div align="center">
+<img src="./images/robot.png" width="500" /></div>
+
+### 2 Data Preparation
+The dataset provides 30 images annotated with polygons in LabelMe. [Click here to download the dataset](https://bj.bcebos.com/paddlex/examples2/robot_grab/dataset_manipulator_grab.zip)
+
+* **Preparation**
+
+First change into the project directory:
+``` shell
+cd path_to_paddlexproject
+```
+Create a `dataset_labelme` folder, and inside it create `JPEGImages` and `Annotations` subfolders. Store the images in `JPEGImages`; the `Annotations` folder will hold the annotation JSON files.
+
+Open LabelMe and click the "Open Dir" button to select the folder containing the images to annotate. The "File List" panel then shows the absolute path of every image, and you can go through the images one by one and annotate them.
+
+For more information on data formats, see the [data annotation documentation](https://paddlex.readthedocs.io/zh_CN/develop/data/annotation/index.html).
+* **Edge Annotation**
+
+Open the polygon tool (right-click menu -> Create Polygon) and click points to trace the target's contour; in the dialog that pops up, enter the corresponding label (if the label already exists, just click it; do not use Chinese labels), as shown below. If a polygon is wrong, click "Edit Polygons" on the left and then the polygon, and drag to adjust it; or click "Delete Polygon" to remove it.
+
+Click "Save" to store the annotation results in the `Annotations` folder created earlier.
+<div align="center">
+<img src="./images/labelme.png" width="1000" /></div>
+
+* **Format Conversion**
+
+LabelMe annotations must be converted to MSCOCO format before they can be used for instance-segmentation training. Create an output directory `dataset`, install paddlex in your Python environment, and run:
+``` shell
+paddlex --data_conversion --source labelme --to COCO --pics dataset_labelme/JPEGImages --annotations dataset_labelme/Annotations --save_dir dataset
+```
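For reference, the conversion maps each LabelMe polygon to a COCO annotation: a flattened segmentation list, a bounding box, and the polygon's area (shoelace formula). A minimal sketch of just the geometry (the real `--data_conversion` tool also builds the image records and category IDs):

```python
def polygon_to_coco(points):
    """Convert a LabelMe polygon [[x, y], ...] to COCO-style
    segmentation, bbox [x, y, w, h], and area (shoelace formula)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    seg = [coord for p in points for coord in p]
    bbox = [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)]
    area = abs(sum(xs[i] * ys[i - 1] - xs[i - 1] * ys[i]
                   for i in range(len(points)))) / 2.0
    return {'segmentation': [seg], 'bbox': bbox, 'area': area}

# A 4x3 rectangle with one corner at the origin
ann = polygon_to_coco([[0, 0], [4, 0], [4, 3], [0, 3]])
print(ann['bbox'], ann['area'])  # [0, 0, 4, 3] 12.0
```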
+
+* **Dataset Splitting**
+Split the data into training, validation, and test sets at a 7:2:1 ratio.
+``` shell
+paddlex --split_dataset --format COCO --dataset_dir dataset --val_value 0.2 --test_value 0.1
+```
+<div align="center">
+<img src="./images/split_dataset.png" width="1500" /></div>
+The dataset folder before and after splitting:
+
+```bash
+  dataset/                      dataset/
+  ├── JPEGImages/       -->     ├── JPEGImages/
+  ├── annotations.json          ├── annotations.json
+                                ├── test.json
+                                ├── train.json
+                                ├── val.json
+```
+
+
+### 3 Model Selection
+PaddleX provides a rich set of vision models, including the Mask R-CNN series for instance segmentation, so users can choose according to their needs.
+
+### 4 Model Training
+In this project we use Mask R-CNN as the wood-block grasping model. See [train.py](./code/train.py) for the full code.
+Run the following command to start training:
+``` shell
+python code/train.py
+```
+To write the training log to a `log` file instead, run:
+``` shell
+python code/train.py > log
+```
+* Training pipeline
+<div align="center">
+<img src="./images/process.png" width="1000" /></div>
+
+``` python
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.RandomResizeByShort(
+        short_sizes=[640, 672, 704, 736, 768, 800],
+        max_size=1333,
+        interp='CUBIC'), T.RandomHorizontalFlip(), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.ResizeByShort(
+        short_size=800, max_size=1333, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+```
+
+```python
+# Define the training and validation datasets
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+train_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/train.json',
+    transforms=train_transforms,
+    shuffle=True)
+eval_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/val.json',
+    transforms=eval_transforms)
+```
+``` python
+# Initialize the model and start training
+# Training metrics can be viewed with VisualDL; see https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train#visualdl可视化训练指标
+num_classes = len(train_dataset.labels)
+model = pdx.models.MaskRCNN(
+    num_classes=num_classes, backbone='ResNet50', with_fpn=True)
+```
+``` python
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L155
+# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=12,
+    train_dataset=train_dataset,
+    train_batch_size=1,
+    eval_dataset=eval_dataset,
+    learning_rate=0.00125,
+    lr_decay_epochs=[8, 11],
+    warmup_steps=10,
+    warmup_start_lr=0.0,
+    save_dir='output/mask_rcnn_r50_fpn',
+    use_vdl=True)
+```
+
+### 5 Training Visualization
+
+When `use_vdl` is set to True in the `train` function, the training process automatically writes the training log in VisualDL format to the `vdl_log` directory under `save_dir` (the user-specified path).
+
+Start the VisualDL service with the following command to view the visualized metrics:
+
+```
+visualdl --logdir output/mask_rcnn_r50_fpn/vdl_log --port 8001
+```
+
+<div align="center">
+<img src="./images/vdl.png" width="1000" /></div>
+
+Once the service is running, open http://localhost:8001/ in a browser as prompted on the command line.
+
+### 6 Model Export
+After training, the model is saved in the output folder. To deploy it with Paddle Inference it must first be exported as a static-graph model. Running the following command automatically creates an `inference_model` folder under output to store the exported model.
+
+``` bash
+paddlex --export_inference --model_dir=output/mask_rcnn_r50_fpn/best_model --save_dir=output/inference_model
+```
+
+### 8 Model Inference
+
+Run the following code:
+```bash
+python code/infer.py
+```
+The file contents are as follows:
+```python
+import numpy as np
+import cv2
+import paddlex as pdx
+
+image_name = 'dataset/JPEGImages/Image_20210615204210757.bmp'
+model = pdx.load_model('output/mask_rcnn_r50_fpn/best_model')
+
+
+img = cv2.imread(image_name)
+result = model.predict(img)
+
+keep_results = []
+areas = []
+f = open('result.txt', 'a')
+count = 0
+# Keep detections scoring at least 0.5, logging each one to result.txt
+for dt in np.array(result):
+    cname, bbox, score = dt['category'], dt['bbox'], dt['score']
+    if score < 0.5:
+        continue
+    keep_results.append(dt)
+    count+=1
+    f.write(str(dt)+'\n')
+    f.write('\n')
+    areas.append(bbox[2] * bbox[3])
+areas = np.asarray(areas)
+# Sort the kept detections by bbox area, largest first
+sorted_idxs = np.argsort(-areas).tolist()
+keep_results = [keep_results[k]
+                for k in sorted_idxs] if len(keep_results) > 0 else []
+print(keep_results)
+print(count)
+f.write("the total number is: " + str(int(count)))
+f.close()
+
+pdx.det.visualize(image_name, result, threshold=0.5, save_dir='./output/mask_rcnn_r50_fpn')
+```
+The run generates a result.txt file and a visualized prediction image in the directory; result.txt records the position, category, and confidence of every detected box in the image, as well as the total number of boxes.
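The score filtering and area sorting that infer.py applies can be sketched independently of PaddleX. In this minimal sketch the detection dicts are hypothetical stand-ins for the entries returned by `model.predict`:

```python
# Keep detections scoring at least 0.5, then sort them by bbox area,
# largest first -- the same post-processing infer.py performs.
def filter_and_sort(detections, score_thresh=0.5):
    kept = [dt for dt in detections if dt['score'] >= score_thresh]
    # bbox is [x, y, w, h], so w * h is the box area
    kept.sort(key=lambda dt: dt['bbox'][2] * dt['bbox'][3], reverse=True)
    return kept

detections = [
    {'category': 'part', 'bbox': [10, 10, 20, 20], 'score': 0.9},
    {'category': 'part', 'bbox': [5, 5, 50, 40], 'score': 0.8},
    {'category': 'part', 'bbox': [0, 0, 5, 5], 'score': 0.3},  # filtered out
]
kept = filter_and_sort(detections)
print(len(kept))        # 2
print(kept[0]['bbox'])  # [5, 5, 50, 40] -- the largest box comes first
```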
+
+The prediction result is shown below:
+<div align="center">
+<img src="./images/predict.bmp"  width = "1000" />              </div>

+ 40 - 0
dygraph/examples/robot_grab/code/infer.py

@@ -0,0 +1,40 @@
+import numpy as np
+import cv2
+import paddlex as pdx
+
+image_name = 'dataset/JPEGImages/Image_20210615204210757.bmp'
+model = pdx.load_model('output/mask_rcnn_r50_fpn/best_model')
+
+img = cv2.imread(image_name)
+result = model.predict(img)
+
+keep_results = []
+areas = []
+f = open('result.txt', 'a')
+count = 0
+for dt in np.array(result):
+    cname, bbox, score = dt['category'], dt['bbox'], dt['score']
+    if score < 0.5:
+        continue
+    keep_results.append(dt)
+    count += 1
+    f.write(str(dt) + '\n')
+    f.write('\n')
+    areas.append(bbox[2] * bbox[3])
+areas = np.asarray(areas)
+sorted_idxs = np.argsort(-areas).tolist()
+keep_results = [keep_results[k]
+                for k in sorted_idxs] if len(keep_results) > 0 else []
+print(keep_results)
+print(count)
+f.write("the total number is: " + str(int(count)))
+f.close()
+pdx.det.visualize(
+    image_name, result, threshold=0.5, save_dir='./output/mask_rcnn_r50_fpn')

+ 50 - 0
dygraph/examples/robot_grab/code/train.py

@@ -0,0 +1,50 @@
+import paddlex as pdx
+from paddlex import transforms as T
+
+# Define the transforms for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.RandomResizeByShort(
+        short_sizes=[640, 672, 704, 736, 768, 800],
+        max_size=1333,
+        interp='CUBIC'), T.RandomHorizontalFlip(), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.ResizeByShort(
+        short_size=800, max_size=1333, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+# Define the datasets used for training and validation
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/dygraph/paddlex/cv/datasets/coco.py#L26
+train_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/train.json',
+    transforms=train_transforms,
+    shuffle=True)
+eval_dataset = pdx.datasets.CocoDetection(
+    data_dir='dataset/JPEGImages',
+    ann_file='dataset/val.json',
+    transforms=eval_transforms)
+
+# Initialize the model and start training
+# Training metrics can be viewed with VisualDL; see https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train#visualdl可视化训练指标
+num_classes = len(train_dataset.labels)
+model = pdx.models.MaskRCNN(
+    num_classes=num_classes, backbone='ResNet50', with_fpn=True)
+
+# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0-rc/paddlex/cv/models/detector.py#L155
+# Parameter descriptions and tuning guide: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=12,
+    train_dataset=train_dataset,
+    train_batch_size=1,
+    eval_dataset=eval_dataset,
+    learning_rate=0.00125,
+    lr_decay_epochs=[8, 11],
+    warmup_steps=10,
+    warmup_start_lr=0.0,
+    save_dir='output/mask_rcnn_r50_fpn',
+    use_vdl=True)

BIN
dygraph/examples/robot_grab/images/labelme.png


BIN
dygraph/examples/robot_grab/images/lens.png


BIN
dygraph/examples/robot_grab/images/predict.bmp


BIN
dygraph/examples/robot_grab/images/predict.jpg


BIN
dygraph/examples/robot_grab/images/process.png


BIN
dygraph/examples/robot_grab/images/robot.png


BIN
dygraph/examples/robot_grab/images/split_dataset.png


BIN
dygraph/examples/robot_grab/images/vdl.png


BIN
dygraph/examples/robot_grab/images/vdl2.png