Browse code

fix conflicts

FlyingQianMM 4 years ago
Parent
Current commit
bd1a172fb4
100 changed files with 1390 additions and 1731 deletions
  1. 76 87
      README.md
  2. 14 0
      deploy/README.md
  3. 11 37
      deploy/cpp/README.md
  4. 9 0
      deploy/cpp/docs/apis/model.md
  5. 1 1
      deploy/cpp/docs/compile/openvino/README.md
  6. 0 1
      deploy/cpp/docs/compile/openvino/openvino_windows.md
  7. 4 4
      deploy/cpp/docs/compile/paddle/linux.md
  8. 0 0
      deploy/cpp/docs/csharp_deploy/c#/Form1.Designer.cs
  9. 215 274
      deploy/cpp/docs/csharp_deploy/c#/Form1.cs
  10. 0 0
      deploy/cpp/docs/csharp_deploy/c#/Form1.resx
  11. 0 0
      deploy/cpp/docs/csharp_deploy/c#/Program.cs
  12. 0 0
      deploy/cpp/docs/csharp_deploy/c#/WinFormsApp_final.csproj
  13. 0 0
      deploy/cpp/docs/csharp_deploy/c#/WinFormsApp_final.csproj.user
  14. 0 0
      deploy/cpp/docs/csharp_deploy/c#/WinFormsApp_final.sln
  15. 0 0
      deploy/cpp/docs/csharp_deploy/images/1.png
  16. 0 0
      deploy/cpp/docs/csharp_deploy/images/10.png
  17. 0 0
      deploy/cpp/docs/csharp_deploy/images/11.png
  18. 0 0
      deploy/cpp/docs/csharp_deploy/images/12.png
  19. 0 0
      deploy/cpp/docs/csharp_deploy/images/13.png
  20. 0 0
      deploy/cpp/docs/csharp_deploy/images/14.png
  21. 0 0
      deploy/cpp/docs/csharp_deploy/images/15.png
  22. 0 0
      deploy/cpp/docs/csharp_deploy/images/16.png
  23. 0 0
      deploy/cpp/docs/csharp_deploy/images/17.png
  24. 0 0
      deploy/cpp/docs/csharp_deploy/images/18.png
  25. 0 0
      deploy/cpp/docs/csharp_deploy/images/19.png
  26. 0 0
      deploy/cpp/docs/csharp_deploy/images/2.png
  27. 0 0
      deploy/cpp/docs/csharp_deploy/images/20.png
  28. 0 0
      deploy/cpp/docs/csharp_deploy/images/21.png
  29. 0 0
      deploy/cpp/docs/csharp_deploy/images/22.png
  30. 0 0
      deploy/cpp/docs/csharp_deploy/images/23.png
  31. 0 0
      deploy/cpp/docs/csharp_deploy/images/24.png
  32. 0 0
      deploy/cpp/docs/csharp_deploy/images/25.png
  33. Binary
      deploy/cpp/docs/csharp_deploy/images/26.png
  34. Binary
      deploy/cpp/docs/csharp_deploy/images/27.png
  35. Binary
      deploy/cpp/docs/csharp_deploy/images/28.png
  36. Binary
      deploy/cpp/docs/csharp_deploy/images/29.png
  37. 0 0
      deploy/cpp/docs/csharp_deploy/images/3.png
  38. 0 0
      deploy/cpp/docs/csharp_deploy/images/4.png
  39. 0 0
      deploy/cpp/docs/csharp_deploy/images/5.png
  40. 0 0
      deploy/cpp/docs/csharp_deploy/images/6.png
  41. 0 0
      deploy/cpp/docs/csharp_deploy/images/7.png
  42. 0 0
      deploy/cpp/docs/csharp_deploy/images/8.5.png
  43. 0 0
      deploy/cpp/docs/csharp_deploy/images/8.png
  44. 0 0
      deploy/cpp/docs/csharp_deploy/images/9.png
  45. 70 63
      deploy/cpp/docs/csharp_deploy/model_infer.cpp
  46. 1 1
      deploy/cpp/docs/demo/decrypt_infer.md
  47. 33 0
      deploy/cpp/docs/demo/model_infer.md
  48. 9 0
      deploy/cpp/docs/file_format.md
  49. Binary
      deploy/cpp/docs/images/cmakelist_set.png
  50. Binary
      deploy/cpp/docs/images/cpu_infer.png
  51. Binary
      deploy/cpp/docs/images/deploy_build_sh.png
  52. Binary
      deploy/cpp/docs/images/infer_demo_cmakelist.png
  53. Binary
      deploy/cpp/docs/images/paddleinference_filelist.png
  54. Binary
      deploy/cpp/docs/images/show_menu.png
  55. Binary
      deploy/cpp/docs/images/tensorrt.png
  56. 3 3
      deploy/cpp/docs/manufacture_sdk/README.md
  57. 2 1
      deploy/cpp/docs/models/paddledetection.md
  58. 3 2
      deploy/cpp/docs/models/paddleseg.md
  59. 4 5
      deploy/cpp/docs/models/paddlex.md
  60. 39 0
      deploy/cpp/scripts/jetson_build.sh
  61. 6 0
      deploy/python/README.md
  62. 0 1026
      deploy/resources/resnet50_imagenet.yml
  63. 1 0
      docs/CHANGELOG.md
  64. Binary
      docs/Resful_API/images/1.png
  65. Binary
      docs/apis/images/test.jpg
  66. 10 20
      docs/apis/images/yolo_predict.jpg
  67. 7 17
      docs/data/annotation/README.md
  68. 1 1
      docs/data/annotation/classification.md
  69. 1 1
      docs/data/annotation/object_detection.md
  70. 3 3
      docs/data/format/classification.md
  71. 20 0
      docs/gui/README.md
  72. 0 20
      docs/gui/download.md
  73. Binary
      docs/gui/how_to_use.md
  74. Binary
      docs/gui/images/QR2.png
  75. 0 40
      docs/gui/introduce.md
  76. 1 1
      docs/gui/restful/data_struct.md
  77. 0 84
      docs/install.md
  78. Binary
      docs/paddlex.png
  79. Binary
      docs/parameters.md
  80. 1 1
      docs/python_deploy.md
  81. 78 35
      docs/quick_start_API.md
  82. 158 0
      docs/quick_start_GUI.md
  83. 1 0
      docs/quick_start_Resful_API.md
  84. Binary
      examples/C#_deploy/images/26.png
  85. Binary
      examples/C#_deploy/images/27.png
  86. Binary
      examples/C#_deploy/images/28.png
  87. Binary
      examples/C#_deploy/images/29.png
  88. 0 2
      examples/README.md
  89. 1 1
      examples/defect_detection/README.md
  90. 201 0
      examples/helmet_detection/LICENSE
  91. 203 0
      examples/helmet_detection/README.md
  92. 88 0
      examples/helmet_detection/accuracy_improvement.md
  93. 47 0
      examples/helmet_detection/code/infer.py
  94. 68 0
      examples/helmet_detection/code/train.py
  95. Binary
      examples/helmet_detection/images/1.png
  96. Binary
      examples/helmet_detection/images/10.png
  97. Binary
      examples/helmet_detection/images/11.png
  98. Binary
      examples/helmet_detection/images/12.png
  99. Binary
      examples/helmet_detection/images/13.png
  100. Binary
      examples/helmet_detection/images/14.png

+ 76 - 87
README.md

@@ -1,133 +1,122 @@
-# PaddleX全面升级动态图,v2.0.0正式发布!
-
-
-
 <p align="center">
   <img src="./docs/gui/images/paddlex.png" width="360" height ="55" alt="PaddleX" align="middle" />
 </p>
  <p align= "center"> PaddleX -- 飞桨全流程开发工具,以低代码的形式支持开发者快速实现产业实际项目落地 </p>
 
-## :heart:重磅功能升级
-### 全新发布Manufacture SDK,提供工业级多端多平台部署加速的预编译飞桨部署开发包(SDK),通过配置业务逻辑流程文件即可以低代码方式快速完成推理部署。[欢迎体验](./deploy/cpp/docs/manufacture_sdk)
-
-### PaddleX部署全面升级,支持飞桨视觉套件PaddleDetection、PaddleClas、PaddleSeg、PaddleX的端到端统一部署能力。[欢迎体验](./deploy/cpp)
-
-
-### 发布产业实践案例:钢筋计数、缺陷检测、机械手抓取、工业表计读数、Windows系统下使用C#语言部署。[欢迎体验](./examples)
+<p align="left">
+    <a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-red.svg"></a>
+    <a href="https://github.com/PaddlePaddle/PaddleOCR/releases"><img src="https://img.shields.io/github/release/PaddlePaddle/PaddleX.svg"></a>
+    <a href=""><img src="https://img.shields.io/badge/python-3.6+-orange.svg"></a>
+    <a href=""><img src="https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg"></a>
+    <a href=""><img src="https://img.shields.io/badge/QQ_Group-957286141-52B6EF?style=social&logo=tencent-qq&logoColor=000&logoWidth=20"></a>
+</p>
 
-### 升级PaddleX GUI,支持30系列显卡、新增模型PP-YOLO V2、PP-YOLO Tiny 、BiSeNetV2,新增导出API训练脚本功能,无缝切换PaddleX API训练。[欢迎体验](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/install.md#2-padldex-gui%E5%BC%80%E5%8F%91%E6%A8%A1%E5%BC%8F%E5%AE%89%E8%A3%85)
+## 近期动态
+2021.09.10 PaddleX发布2.0.0正式版本。
+- 全新发布Manufacture SDK,支持多模型串联部署。[欢迎体验](./deploy/cpp/docs/manufacture_sdk)
+- PaddleX部署全面升级,支持飞桨视觉套件PaddleDetection、PaddleClas、PaddleSeg、PaddleX的端到端统一部署能力。[欢迎体验](./deploy/cpp/docs/deployment.md)
+- 发布产业实践案例:钢筋计数、缺陷检测、机械手抓取、工业表计读数。[欢迎体验](./examples)
+- 升级PaddleX GUI,支持30系列显卡、新增模型PP-YOLO V2、PP-YOLO Tiny 、BiSeNetV2。[欢迎体验](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/install.md#2-padldex-gui%E5%BC%80%E5%8F%91%E6%A8%A1%E5%BC%8F%E5%AE%89%E8%A3%85)
 
-[![License](https://img.shields.io/badge/license-Apache%202-red.svg)](LICENSE) [![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleX.svg)](https://github.com/PaddlePaddle/PaddleX/releases) ![python version](https://img.shields.io/badge/python-3.6+-orange.svg) ![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg)
- ![QQGroup](https://img.shields.io/badge/QQ_Group-1045148026-52B6EF?style=social&logo=tencent-qq&logoColor=000&logoWidth=20)
+详情内容请参考[版本更新文档](./docs/CHANGELOG.md)。
 
+## 产品介绍
 :hugs: PaddleX 集成飞桨智能视觉领域**图像分类**、**目标检测**、**语义分割**、**实例分割**任务能力,将深度学习开发全流程从**数据准备**、**模型训练与优化**到**多端部署**端到端打通,并提供**统一任务API接口**及**图形化开发界面Demo**。开发者无需分别安装不同套件,以**低代码**的形式即可快速完成飞桨全流程开发。
 
 :factory: **PaddleX** 经过**质检**、**安防**、**巡检**、**遥感**、**零售**、**医疗**等十多个行业实际应用场景验证,沉淀产业实际经验,**并提供丰富的案例实践教程**,全程助力开发者产业实践落地。
 
-![](../docs/gui/images/paddlexoverview.png)
-
-
-## PaddleX 使用文档
-
-
-### 1. 快速上手PaddleX
+<p align="center">
+  <img src="./docs/paddlex_whole.png" width="800"  />
+</p>
 
-* [快速安装PaddleX](./docs/install.md)
-  * [PaddleX API开发模式安装](./docs/install.md#1-paddlex-api开发模式安装)
-  * [PadldeX GUI开发模式安装](./docs/install.md#2-padldex-gui开发模式安装)
-  * [PaddleX Restful开发模式安装](./docs/install.md#3-paddlex-restful开发模式安装)
-* [10分钟快速上手使用](./docs/quick_start.md)
-* [AIStudio在线项目示例](https://aistudio.baidu.com/aistudio/projectdetail/2159977)
-* [常见问题汇总](./docs/FAQ/FAQ.md)
+## 安装与快速体验
+PaddleX提供了图像化开发界面、本地API、Restful-API三种开发模式。用户可根据自己的需求选择任意一种开始体验
+- [PaddleX GUI开发模式](./docs/quick_start_GUI.md)
+- [PaddleX API开发模式](./docs/quick_start_API.md)
+- [PaddleX Restful API开发模式](./docs/Resful_API/docs/readme.md)
+- [快速产业部署](#4-模型部署)
 
+## 产业级应用示例
 
-### 2. 数据准备
+- 安防
+    - [安全帽检测](./examples/helmet_detection)  
+- 工业视觉
+    - [表计读数](./examples/meter_reader)  |  [钢筋计数](./examples/rebar_count)  |  [视觉辅助定位抓取](./examples/robot_grab)
 
-* [数据格式说明](./docs/data/format/README.md)
-* [标注工具LabelMe的安装和启动](./docs/data/annotation/labelme.md)
-* [数据标注](./docs/data/annotation/README.md)
-  * [手机拍照图片旋转](./docs/data/annotation/README.md)
-  * [开始数据标注](./docs/data/annotation/README.md)
-* [数据格式转换](./docs/data/convert.md)
-* [数据划分](./docs/data/split.md)
 
 
-### 3. 模型训练/评估/预测
+## PaddleX 使用文档
+本文档介绍了PaddleX从数据准备、模型训练到模型剪裁量化,及最终部署的全流程使用方法。
+<p align="center">
+  <img src="./docs/process.png" width="600"  />
+</p>
 
-* **PaddleX API开发模式:**
+### 1. 数据准备
 
-    * [API文档](./docs/apis)
-      * [数据集读取API](./docs/apis/datasets.md)
-      * [数据预处理和数据增强API](./docs/apis/transforms/transforms.md)
-      * [模型API/模型加载API](./docs/apis/models/README.md)
-      * [预测结果可视化API](./docs/apis/visualize.md)
-    * [模型训练与参数调整](tutorials/train)
-      * [模型训练](tutorials/train)
-      * [训练参数调整](./docs/parameters.md)
-    * [VisualDL可视化训练指标](./docs/visualdl.md)
-    * [加载训好的模型完成预测及预测结果可视化](./docs/apis/prediction.md)
+- [数据准备流程说明](./docs/data)
+- [数据标注](./docs/data/annotation/README.md)
+- [数据格式转换](./docs/data/convert.md)
+- [数据划分](./docs/data/split.md)
 
-* **PaddleX GUI开发模式:**
+### 2. 模型训练/评估/预测
 
-    - [图像分类](https://www.bilibili.com/video/BV1nK411F7J9?from=search&seid=3068181839691103009)
-    - [目标检测](https://www.bilibili.com/video/BV1HB4y1A73b?from=search&seid=3068181839691103009)
-    - [实例分割](https://www.bilibili.com/video/BV1M44y1r7s6?from=search&seid=3068181839691103009)
-    - [语义分割](https://www.bilibili.com/video/BV1qQ4y1Z7co?from=search&seid=3068181839691103009)
+- [GUI开发模式](./docs/quick_start_GUI.md)
+  - 视频教程:[图像分类](./docs/quick_start_GUI.md/#视频教程) | [目标检测](./docs/quick_start_GUI.md/#视频教程) | [语义分割](./docs/quick_start_GUI.md/#视频教程) | [实例分割](./docs/quick_start_GUI.md/#视频教程)
+- API开发模式
+  - [API文档](./docs/apis)
+    - [数据集读取API](./docs/apis/datasets.md)
+    - [数据预处理和数据增强API](./docs/apis/transforms/transforms.md)
+    - [模型API/模型加载API](./docs/apis/models/README.md)
+    - [预测结果可视化API](./docs/apis/visualize.md)
+  - [模型训练与参数调整](tutorials/train)
+    - [模型训练](tutorials/train)
+    - [训练参数调整](./docs/parameters.md)
+  - [VisualDL可视化训练指标](./docs/visualdl.md)
+  - [加载训好的模型完成预测及预测结果可视化](./docs/apis/prediction.md)
+- [Restful API开发模式](./docs/Resful_API/docs)
+  - [使用说明](./docs/Resful_API/docs)
 
 
-### 4. 模型剪裁和量化
+### 3. 模型压缩
 
 - [模型剪裁](tutorials/slim/prune)
 - [模型量化](tutorials/slim/quantize)
 
-### 5. 模型部署
+### 4. 模型部署
 
 - [部署模型导出](./docs/apis/export_model.md)
-- [PaddleX python高性能部署](./docs/python_deploy.md)
-- [PaddleX Manufacture SDK低代码高效C++部署](./deploy/cpp/docs/manufacture_sdk)
-- [PaddleX/PaddleClas/PaddleDetection/PaddleSeg端到端高性能统一C++部署](./deploy/cpp)
-- [PaddleX python轻量级服务化部署](./docs/hub_serving_deploy.md)
-
-### 6. 产业级应用示例
-
-- [钢筋计数](examples/rebar_count)
-- [缺陷检测](examples/defect_detection)
-- [机械手抓取](examples/robot_grab)
-- [工业表计读数](examples/meter_reader)
-- [Windows系统下使用C#语言部署](examples/C%23_deploy)
-
-### 7. 附录
+- [部署方式概览](./deploy/README.md)
+  - 本地部署
+    - C++部署
+      - [C++源码编译](./deploy/cpp/README.md)
+      - [C#工程化示例](./deploy/cpp/docs/csharp_deploy)
+    - [Python部署](./docs/python_deploy.md)
+  - 服务化部署
+    - [HubServing部署(Python)](./docs/hub_serving_deploy.md)
+  - [基于ONNX部署(C++)](./deploy/cpp/docs/compile/README.md)
+    - [OpenVINO推理引擎](./deploy/cpp/docs/compile/openvino/README.md)
+    - [Triton部署](./deploy/cpp/docs/compile/triton/docker.md)
+- [模型加密](./deploy/cpp/docs/demo/decrypt_infer.md)
+
+### 5. 附录
 
 - [PaddleX模型库](./docs/appendix/model_zoo.md)
 - [PaddleX指标及日志](./docs/appendix/metrics.md)
 - [无联网模型训练](./docs/how_to_offline_run.md)
 
-## 版本更新
-
-- **2021.09.10 v2.0.0**
-
-  PaddleX 2.0动态图版本正式发布,PaddleX API、PaddleX GUI开发模式全面支持飞桨2.0动态图。PaddleX GUI新增导出API训练脚本功能,无缝切换PaddleX API训练。PaddleX python预测部署完备, PaddleX模型使用2个API即可快速完成部署。详细内容请参考[版本更新文档](./docs/CHANGELOG.md)
-
-- **2021.07.06 v2.0.0-rc3**
-
-  PaddleX部署全面升级,支持飞桨视觉套件PaddleDetection、PaddleClas、PaddleSeg、PaddleX的端到端统一部署能力。全新发布Manufacture SDK,提供工业级多端多平台部署加速的预编译飞桨部署开发包(SDK),通过配置业务逻辑流程文件即可以低代码方式快速完成推理部署。发布产业实践案例:钢筋计数、缺陷检测、机械手抓取、工业表计读数、Windows系统下使用C#语言部署。升级PaddleX GUI,支持30系列显卡、新增模型PP-YOLO V2、PP-YOLO Tiny 、BiSeNetV2。详细内容请参考[版本更新文档](./docs/CHANGELOG.md)
-
-- **2021.05.19 v2.0.0-rc**
-
-  全面支持飞桨2.0动态图,更易用的开发模式。 目标检测任务新增PP-YOLOv2, COCO test数据集精度达到49.5%、V100预测速度达到68.9 FPS。目标检测任务新增4.2MB的超轻量级模型PP-YOLO tiny。语义分割任务新增实时分割模型BiSeNetV2。C++部署模块全面升级,PaddleInference部署适配2.0预测库,支持飞桨PaddleDetection、PaddleSeg、PaddleClas以及PaddleX的模型部署;新增基于PaddleInference的GPU多卡预测;GPU部署新增基于ONNX的的TensorRT高性能加速引擎部署方式;GPU部署新增基于ONNX的Triton服务化部署方式。详情内容请参考[版本更新文档](./docs/CHANGELOG.md)。
-
+## 常见问题汇总
+- [GUI相关问题](./docs/FAQ/FAQ.md/#GUI相关问题)
+- [API训练相关问题](./docs/FAQ/FAQ.md/#API训练相关问题)
+- [推理部署问题](./docs/FAQ/FAQ.md/#推理部署问题)
 
 ## 交流与反馈
 
 - 项目官网:https://www.paddlepaddle.org.cn/paddle/paddlex
-
 - PaddleX用户交流群:957286141 (手机QQ扫描如下二维码快速加入)  
-
   <p align="center">
-    <img src="./docs/gui/images/QR2.jpg" width="250" height ="360" alt="QR" align="middle" />
+    <img src="./docs/gui/images/QR2.png" width="180" height ="180" alt="QR" align="middle" />
   </p>
 
-
 ## :hugs: 贡献代码:hugs:
 
 我们非常欢迎您为PaddleX贡献代码或者提供使用建议。如果您可以修复某个issue或者增加一个新功能,欢迎给我们提交Pull Requests。

+ 14 - 0
deploy/README.md

@@ -0,0 +1,14 @@
+# 部署方式概览
+
+PaddleX提供了多种部署方式,用户可根据实际需要选择本地部署、边缘侧部署、服务化部署、Docker部署。部署方式目录如下:
+
+  - 本地部署
+    - C++部署
+      - [C++源码编译](./../deploy/cpp/README.md)
+      - [C#工程化示例](./../deploy/cpp/docs/csharp_deploy)
+    - [Python部署](./../docs/python_deploy.md)
+  - 服务化部署
+    - [HubServing部署(Python)](./../docs/hub_serving_deploy.md)
+  - [基于ONNX部署(C++)](./../deploy/cpp/docs/compile/README.md)
+    - [OpenVINO推理引擎](./../deploy/cpp/docs/compile/openvino/README.md)
+    - [Triton部署](./../deploy/cpp/docs/compile/triton/docker.md)

+ 11 - 37
deploy/cpp/README.md

@@ -1,44 +1,18 @@
-## PaddlePaddle模型C++部署
+# C++部署
 
-本目录下代码,目前支持以下飞桨官方套件基于PaddleInference的部署。
 
-## 模型套件支持
-- PaddleDetection([release/2.1](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1))
-- PaddleSeg([release/2.1](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1))
-- PaddleClas([release/2.1](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1))
-- PaddleX([release/2.0-rc](https://github.com/PaddlePaddle/PaddleX))
+基于飞桨原生推理库PaddleInference,PaddleX推出了统一部署编译方式**PaddleX-Deploy**。
 
-## 硬件支持
-- CPU(linux/windows)
-- GPU(linux/windows)
-- Jetson(TX2/Nano/Xavier)
+**PaddleX-Deploy**提供了强大的部署性能,可同时兼容飞桨视觉套件PaddleDetection、PaddleClas、PaddleSeg、PaddleX统一部署,支持Windows、Linux等多种系统。同时提供了工业级别的C#部署工程示例。
 
-## 文档
-### PaddleInference编译说明
-- [Linux编译(支持加密)指南](./docs/compile/paddle/linux.md)
-- [Windows编译(支持加密)指南](./docs/compile/paddle/windows.md)
-- [Jetson编译指南](./docs/compile/paddle/jetson.md)
+- [PaddleX Deployment部署方式说明](./docs/deployment.md)
+- [C#部署工程示例](./docs/csharp_deploy)
+---
+为更进一步地提升部署效率,PaddleX部署发布[Manufacture SDK](./docs/manufacture_sdk),提供工业级多端多平台部署加速的预编译飞桨部署开发包(SDK)。
 
-### 模型部署说明
-- [PaddleX部署指南](./docs/models/paddlex.md)
-- [PaddleDetection部署指南](./docs/models/paddledetection.md)
-- [PaddleSeg部署指南](./docs/models/paddleseg.md)
-- [PaddleClas部署指南](./docs/models/paddleclas.md)
+- 通过配置业务逻辑流程文件即可以**低代码**方式快速完成推理部署。
 
-### 模型预测示例
-- [单卡加载模型预测示例](./docs/demo/model_infer.md)
-- [多卡加载模型预测示例](./docs/demo/multi_gpu_model_infer.md)
-- [PaddleInference集成TensorRT加载模型预测示例](./docs/demo/tensorrt_infer.md)
-- [模型加密预测示例](./docs/demo/decrypt_infer.md)
+<div align="center">
+<img src="./docs/manufacture_sdk/images/pipeline_det.png"  width = "500" />              </div>
 
-### API说明
-
-- [部署相关API说明](./docs/apis/model.md)
-- [模型配置文件说明](./docs/apis/yaml.md)
-
-
-## ONNX模型部署
-Paddle的模型除了直接通过PaddleInference部署外,还可以通过[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX.git)转为ONNX后使用第三方推理引擎进行部署,在本目录下,我们提供了基于OpenVINO、Triton和TensorRT三个引擎的部署支持。
-- [OpenVINO部署](./docs/compile/openvino/README.md)
-- [Triton部署](./docs/compile/triton/docker.md)
-- [TensorRT部署](./docs/compile/tensorrt/trt.md)
+- 通过配置文件,提供了**多模型串联**部署的方式,满足更多生产环境要求,相关使用参考[工业表计读数](./../../examples/meter_reader)。

+ 9 - 0
deploy/cpp/docs/apis/model.md

@@ -0,0 +1,9 @@
+## PaddleInference模型部署
+- [Linux编译(支持加密)指南](./paddle/linux.md)
+- [Windows编译(支持加密)指南](./paddle/windows.md)
+
+## ONNX模型部署
+Paddle的模型除了直接通过PaddleInference部署外,还可以通过[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX.git)转为ONNX后使用第三方推理引擎进行部署,在本目录下,我们提供了基于OpenVINO、Triton和TensorRT三个引擎的部署支持。
+- [OpenVINO部署](./openvino/README.md)
+- [Triton部署](./triton/docker.md)
+- [TensorRT部署](./tensorrt/trt.md)

+ 1 - 1
deploy/cpp/docs/compile/openvino/README.md

@@ -2,7 +2,7 @@
 
 本文档指引用户如何基于OpenVINO对飞桨模型进行推理,并编译执行。进行以下编译操作前请先安装好OpenVINO,OpenVINO安装请参考官网[OpenVINO-Linux](https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html)
 
-**注意:** 
+**注意:**
 
 - 我们测试的openvino版本为2021.3,如果你使用其它版本遇到问题,可以尝试切换到该版本
 - 当前检测模型转换为openvino格式是有问题的,暂时只支持分割和分类模型

+ 0 - 1
deploy/cpp/docs/compile/openvino/openvino_windows.md

@@ -1 +0,0 @@
-# 基于PaddleInference的推理-Jetson环境编译

+ 4 - 4
deploy/cpp/docs/compile/paddle/linux.md → deploy/cpp/docs/compile/paddle/linux.md

@@ -84,9 +84,9 @@
 <img src="./images/8.5.png"  width = "800" />             </div>
 ### 3.2 修改model_infer.cpp并重新生成dll
 
-* 修改后的model_infer.cpp已经提供,请用paddleX/examples/C#_deploy/model_infer.cpp文件替换PaddleX/deploy/cpp/demo/model_infer.cpp
+* 修改后的model_infer.cpp已经提供,请用paddleX/deploy/cpp/docs/csharp_deploy/model_infer.cpp文件替换PaddleX/deploy/cpp/demo/model_infer.cpp
 
-### 3.3 创建一个c#项目并调用dll
+### 3.3 创建一个C#项目并调用dll
 
 * 目前已经给出了C#项目,支持PaddleX PaddleClas PaddleDetection PaddleSeg的模型去预测,为了方便大家使用,提供了在单张图片/多张图片/视频流预测形式。支持实时显示预测时间,支持预测GPU和CPU分别预测。
 * 用户只需要运行.sln文件即可呈现如下文件形式:
@@ -107,7 +107,7 @@
 <img src="./images/18.png"  width = "800" />             </div>
 
 * 此外需保证在C#项目的bin\x64\Debug\net5.0-windows下包含以下dll,再进行预测推理
- 
+
   - opencv_world346.dll, 位于下载的opencv文件夹: opencv\build\x64\vc15\bin
   - model_infer.dll, 位于上边cmake编译的目录下: PaddleX\deploy\cpp\out\paddle_deploy\Release
   - 其余dll, 位于以下目录: PaddleX\deploy\cpp\out\paddle_deploy
@@ -131,7 +131,7 @@ MaskRCNN实例分割:
 <div align="center">
 <img src="./images/23.png"  width = "800" />             </div>
 
-### 3.4 c#项目:可视化界面功能简要描述
+### 3.4 C#项目:可视化界面功能简要描述
 
 - 1.可加载PaddleSeg, PaddleClas, PaddleDetection以及PaddleX导出的部署模型, 分别对应模型选择中的: seg, clas, det, paddlex
 - 2.目前也支持GPU下加载MaskRCNN进行实例分割可视化推理,需选择模型: mask

+ 0 - 0
examples/C#_deploy/C#/Form1.Designer.cs → deploy/cpp/docs/csharp_deploy/c#/Form1.Designer.cs


File diff is too large to display
+ 215 - 274
deploy/cpp/docs/csharp_deploy/c#/Form1.cs


+ 0 - 0
examples/C#_deploy/C#/Form1.resx → deploy/cpp/docs/csharp_deploy/c#/Form1.resx


+ 0 - 0
examples/C#_deploy/C#/Program.cs → deploy/cpp/docs/csharp_deploy/c#/Program.cs


+ 0 - 0
examples/C#_deploy/C#/WinFormsApp_final.csproj → deploy/cpp/docs/csharp_deploy/c#/WinFormsApp_final.csproj


+ 0 - 0
examples/C#_deploy/C#/WinFormsApp_final.csproj.user → deploy/cpp/docs/csharp_deploy/c#/WinFormsApp_final.csproj.user


+ 0 - 0
examples/C#_deploy/C#/WinFormsApp_final.sln → deploy/cpp/docs/csharp_deploy/c#/WinFormsApp_final.sln


+ 0 - 0
examples/C#_deploy/images/1.png → deploy/cpp/docs/csharp_deploy/images/1.png


+ 0 - 0
examples/C#_deploy/images/10.png → deploy/cpp/docs/csharp_deploy/images/10.png


+ 0 - 0
examples/C#_deploy/images/11.png → deploy/cpp/docs/csharp_deploy/images/11.png


+ 0 - 0
examples/C#_deploy/images/12.png → deploy/cpp/docs/csharp_deploy/images/12.png


+ 0 - 0
examples/C#_deploy/images/13.png → deploy/cpp/docs/csharp_deploy/images/13.png


+ 0 - 0
examples/C#_deploy/images/14.png → deploy/cpp/docs/csharp_deploy/images/14.png


+ 0 - 0
examples/C#_deploy/images/15.png → deploy/cpp/docs/csharp_deploy/images/15.png


+ 0 - 0
examples/C#_deploy/images/16.png → deploy/cpp/docs/csharp_deploy/images/16.png


+ 0 - 0
examples/C#_deploy/images/17.png → deploy/cpp/docs/csharp_deploy/images/17.png


+ 0 - 0
examples/C#_deploy/images/18.png → deploy/cpp/docs/csharp_deploy/images/18.png


+ 0 - 0
examples/C#_deploy/images/19.png → deploy/cpp/docs/csharp_deploy/images/19.png


+ 0 - 0
examples/C#_deploy/images/2.png → deploy/cpp/docs/csharp_deploy/images/2.png


+ 0 - 0
examples/C#_deploy/images/20.png → deploy/cpp/docs/csharp_deploy/images/20.png


+ 0 - 0
examples/C#_deploy/images/21.png → deploy/cpp/docs/csharp_deploy/images/21.png


+ 0 - 0
examples/C#_deploy/images/22.png → deploy/cpp/docs/csharp_deploy/images/22.png


+ 0 - 0
examples/C#_deploy/images/23.png → deploy/cpp/docs/csharp_deploy/images/23.png


+ 0 - 0
examples/C#_deploy/images/24.png → deploy/cpp/docs/csharp_deploy/images/24.png


+ 0 - 0
examples/C#_deploy/images/25.png → deploy/cpp/docs/csharp_deploy/images/25.png


Binary
deploy/cpp/docs/csharp_deploy/images/26.png


Binary
deploy/cpp/docs/csharp_deploy/images/27.png


Binary
deploy/cpp/docs/csharp_deploy/images/28.png


Binary
deploy/cpp/docs/csharp_deploy/images/29.png


+ 0 - 0
examples/C#_deploy/images/3.png → deploy/cpp/docs/csharp_deploy/images/3.png


+ 0 - 0
examples/C#_deploy/images/4.png → deploy/cpp/docs/csharp_deploy/images/4.png


+ 0 - 0
examples/C#_deploy/images/5.png → deploy/cpp/docs/csharp_deploy/images/5.png


+ 0 - 0
examples/C#_deploy/images/6.png → deploy/cpp/docs/csharp_deploy/images/6.png


+ 0 - 0
examples/C#_deploy/images/7.png → deploy/cpp/docs/csharp_deploy/images/7.png


+ 0 - 0
examples/C#_deploy/images/8.5.png → deploy/cpp/docs/csharp_deploy/images/8.5.png


+ 0 - 0
examples/C#_deploy/images/8.png → deploy/cpp/docs/csharp_deploy/images/8.png


+ 0 - 0
examples/C#_deploy/images/9.png → deploy/cpp/docs/csharp_deploy/images/9.png


+ 70 - 63
examples/C#_deploy/model_infer.cpp → deploy/cpp/docs/csharp_deploy/model_infer.cpp

@@ -1,29 +1,30 @@
-#include <gflags/gflags.h>
 #include <string>
 #include <vector>
 
 #include "model_deploy/common/include/paddle_deploy.h"
 
+// Global model pointer
 PaddleDeploy::Model* model;
 
 /*
-* 模型初始化/注册接口
-* 
-* model_type: 初始化模型类型: det,seg,clas,paddlex
-* 
-* model_filename: 模型文件路径
-* 
-* params_filename: 参数文件路径
-* 
-* cfg_file: 配置文件路径
-* 
-* use_gpu: 是否使用GPU
-* 
-* gpu_id: 指定第x号GPU
-* 
-* paddlex_model_type: model_type为paddlx时,返回的实际paddlex模型的类型: det, seg, clas
+* Model initialization / registration API
+*
+* model_type: det,seg,clas,paddlex
+*
+* model_filename: Model file path
+*
+* params_filename: Parameter file path
+*
+* cfg_file: Configuration file path
+*
+* use_gpu: Whether to use GPU
+*
+* gpu_id: Specify GPU x
+*
+* paddlex_model_type: When model_type is paddlex, the type of the actual PaddleX model returned: det, seg, clas
+*
 */
-extern "C" __declspec(dllexport) void InitModel(const char* model_type, const char* model_filename, const char* params_filename, const char* cfg_file, bool use_gpu, int gpu_id, char* paddlex_model_type)
+extern "C" void InitModel(const char* model_type, const char* model_filename, const char* params_filename, const char* cfg_file, bool use_gpu, int gpu_id, char* paddlex_model_type)
 {
 	// create model
 	model = PaddleDeploy::CreateModel(model_type);  //FLAGS_model_type
@@ -44,7 +45,7 @@ extern "C" __declspec(dllexport) void InitModel(const char* model_type, const ch
 	}
 
 	// det, seg, clas, paddlex
-	if (strcmp(model_type, "paddlex") == 0) // 是paddlex模型,则返回具体支持的模型类型: det, seg, clas
+	if (strcmp(model_type, "paddlex") == 0) // If it is a PADDLEX model, return the specifically supported model type: det, seg, clas
 	{
 		// detector
 		if (model->yaml_config_["model_type"].as<std::string>() == std::string("detector"))
@@ -60,12 +61,12 @@ extern "C" __declspec(dllexport) void InitModel(const char* model_type, const ch
 			strcpy(paddlex_model_type, "clas");
 		}
 	}
-} 
+}
 
 
 /*
-* 检测推理接口
-* 
+* Detection inference API
+*
 * img: input for predicting.
 *
 * nWidth: width of img.
@@ -79,8 +80,10 @@ extern "C" __declspec(dllexport) void InitModel(const char* model_type, const ch
 * nBoxesNum: number of boxes
 *
 * LabelList: label list of result
+*
+* extern "C"
 */
-extern "C" __declspec(dllexport) void Det_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, float* output, int* nBoxesNum, char* LabelList)
+extern "C" void Det_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, float* output, int* nBoxesNum, char* LabelList)
 {
 	// prepare data
 	std::vector<cv::Mat> imgs;
@@ -98,28 +101,26 @@ extern "C" __declspec(dllexport) void Det_ModelPredict(const unsigned char* img,
 
 	cv::Mat input = cv::Mat::zeros(cv::Size(nWidth, nHeight), nType);
 	memcpy(input.data, img, nHeight * nWidth * nChannel * sizeof(uchar));
-	//cv::imwrite("./1.png", input);
 	imgs.push_back(std::move(input));
 
 	// predict
 	std::vector<PaddleDeploy::Result> results;
 	model->Predict(imgs, &results, 1);
 
-	// nBoxesNum[0] = results.size();  // results.size()得到的是batch_size
-	nBoxesNum[0] = results[0].det_result->boxes.size();  // 得到单张图片预测的bounding box数
+	// nBoxesNum[0] = results.size();  // results.size() is returning batch_size
+	nBoxesNum[0] = results[0].det_result->boxes.size();  // Get the predicted Bounding Box number of a single image
 	std::string label = "";
 	//std::cout << "res: " << results[num] << std::endl;
-	for (int i = 0; i < results[0].det_result->boxes.size(); i++)  // 得到所有框的数据
+	for (int i = 0; i < results[0].det_result->boxes.size(); i++)  // Get the data for all the boxes
 	{
-		//std::cout << "category: " << results[num].det_result->boxes[i].category << std::endl;
 		label = label + results[0].det_result->boxes[i].category + " ";
 		// labelindex
-		output[i * 6 + 0] = results[0].det_result->boxes[i].category_id; // 类别的id
+		output[i * 6 + 0] = results[0].det_result->boxes[i].category_id; // Category ID
 		// score
-		output[i * 6 + 1] = results[0].det_result->boxes[i].score;  // 得分
+		output[i * 6 + 1] = results[0].det_result->boxes[i].score;  // Score
 		//// box
-		output[i * 6 + 2] = results[0].det_result->boxes[i].coordinate[0]; // x1, y1, x2, y2
-		output[i * 6 + 3] = results[0].det_result->boxes[i].coordinate[1]; // 左上、右下的顶点
+		output[i * 6 + 2] = results[0].det_result->boxes[i].coordinate[0]; // x1, y1, w, h
+		output[i * 6 + 3] = results[0].det_result->boxes[i].coordinate[1]; // Upper left and lower right vertices
 		output[i * 6 + 4] = results[0].det_result->boxes[i].coordinate[2];
 		output[i * 6 + 5] = results[0].det_result->boxes[i].coordinate[3];
 	}
@@ -128,8 +129,8 @@ extern "C" __declspec(dllexport) void Det_ModelPredict(const unsigned char* img,
 
 
 /*
-* 分割推理接口
-* 
+* Segmentation inference API
+*
 * img: input for predicting.
 *
 * nWidth: width of img.
@@ -139,8 +140,10 @@ extern "C" __declspec(dllexport) void Det_ModelPredict(const unsigned char* img,
 * nChannel: channel of img.
 *
 * output: result of pridict ,include label_map
+*
+* extern "C"
 */
-extern "C" __declspec(dllexport) void Seg_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, unsigned char* output)
+extern "C" void Seg_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, unsigned char* output)
 {
 	// prepare data
 	std::vector<cv::Mat> imgs;
@@ -158,22 +161,21 @@ extern "C" __declspec(dllexport) void Seg_ModelPredict(const unsigned char* img,
 
 	cv::Mat input = cv::Mat::zeros(cv::Size(nWidth, nHeight), nType);
 	memcpy(input.data, img, nHeight * nWidth * nChannel * sizeof(uchar));
-	//cv::imwrite("./1.png", input);
 	imgs.push_back(std::move(input));
 
 	// predict
 	std::vector<PaddleDeploy::Result> results;
 	model->Predict(imgs, &results, 1);
 
-	std::vector<uint8_t> result_map = results[0].seg_result->label_map.data; // vector<uint8_t> -- 结果map
-	// 拷贝输出结果到输出上返回 -- 将vector<uint8_t>转成unsigned char *
+	std::vector<uint8_t> result_map = results[0].seg_result->label_map.data; // vector<uint8_t> -- Result Map
+	// Copy output result to the output back -- from vector<uint8_t> to unsigned char *
 	memcpy(output, &result_map[0], result_map.size() * sizeof(uchar));
 }
 
 
 /*
-* 识别推理接口
-* 
+* Recognition inference API
+*
 * img: input for predicting.
 *
 * nWidth: width of img.
@@ -183,12 +185,14 @@ extern "C" __declspec(dllexport) void Seg_ModelPredict(const unsigned char* img,
 * nChannel: channel of img.
 *
 * score: result of pridict ,include score
-* 
+*
 * category: result of pridict ,include category_string
-* 
+*
 * category_id: result of pridict ,include category_id
+*
+* extern "C"
 */
-extern "C" __declspec(dllexport) void Cls_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, float* score, char* category, int* category_id)
+extern "C" void Cls_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, float* score, char* category, int* category_id)
 {
 	// prepare data
 	std::vector<cv::Mat> imgs;
@@ -206,7 +210,6 @@ extern "C" __declspec(dllexport) void Cls_ModelPredict(const unsigned char* img,
 
 	cv::Mat input = cv::Mat::zeros(cv::Size(nWidth, nHeight), nType);
 	memcpy(input.data, img, nHeight * nWidth * nChannel * sizeof(uchar));
-	//cv::imwrite("./1.png", input);
 	imgs.push_back(std::move(input));
 
 	// predict
@@ -214,16 +217,16 @@ extern "C" __declspec(dllexport) void Cls_ModelPredict(const unsigned char* img,
 	model->Predict(imgs, &results, 1);
 
 	*category_id = results[0].clas_result->category_id;
-	// 拷贝输出类别结果到输出上返回 -- string --> char* 
+	// Copy output category result to output -- string --> char*
 	memcpy(category, results[0].clas_result->category.c_str(), strlen(results[0].clas_result->category.c_str()));
-	// 拷贝输出概率值返回
+	// Copy output probability value
 	*score = results[0].clas_result->score;
-}	
+}
 
 
 /*
-* MaskRCNN推理接口
-* 
+* MaskRCNN inference API
+*
 * img: input for predicting.
 *
 * nWidth: width of img.
@@ -237,10 +240,12 @@ extern "C" __declspec(dllexport) void Cls_ModelPredict(const unsigned char* img,
 * mask_output: result of pridict ,include label_map
 *
 * nBoxesNum: result of pridict ,include BoxesNum
-* 
+*
 * LabelList: result of pridict ,include LabelList
+*
+* extern "C"
 */
-extern "C" __declspec(dllexport) void Mask_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, float* box_output, unsigned char* mask_output, int* nBoxesNum, char* LabelList)
+extern "C" void Mask_ModelPredict(const unsigned char* img, int nWidth, int nHeight, int nChannel, float* box_output, unsigned char* mask_output, int* nBoxesNum, char* LabelList)
 {
 	// prepare data
 	std::vector<cv::Mat> imgs;
@@ -260,28 +265,28 @@ extern "C" __declspec(dllexport) void Mask_ModelPredict(const unsigned char* img
 	memcpy(input.data, img, nHeight * nWidth * nChannel * sizeof(uchar));
 	imgs.push_back(std::move(input));
 
-	// predict  -- 多次点击单张推理时会出错
+	// predict
 	std::vector<PaddleDeploy::Result> results;
-	model->Predict(imgs, &results, 1);  // 在Infer处发生错误
+	model->Predict(imgs, &results, 1);
 
-	nBoxesNum[0] = results[0].det_result->boxes.size();  // 得到单张图片预测的bounding box数
+	nBoxesNum[0] = results[0].det_result->boxes.size();  // Get the predicted Bounding Box number of a single image
 	std::string label = "";
 
-	for (int i = 0; i < results[0].det_result->boxes.size(); i++)  // 得到所有框的数据
+	for (int i = 0; i < results[0].det_result->boxes.size(); i++)  // Get the data for all the boxes
 	{
-		// 边界框预测结果
+		// prediction results
 		label = label + results[0].det_result->boxes[i].category + " ";
 		// labelindex
-		box_output[i * 6 + 0] = results[0].det_result->boxes[i].category_id; // 类别的id
+		box_output[i * 6 + 0] = results[0].det_result->boxes[i].category_id; // Category ID
 		// score
-		box_output[i * 6 + 1] = results[0].det_result->boxes[i].score;  // 得分
+		box_output[i * 6 + 1] = results[0].det_result->boxes[i].score;  // Score
 		//// box
 		box_output[i * 6 + 2] = results[0].det_result->boxes[i].coordinate[0]; // x1, y1, x2, y2
-		box_output[i * 6 + 3] = results[0].det_result->boxes[i].coordinate[1]; // 左上、右下的顶点
+		box_output[i * 6 + 3] = results[0].det_result->boxes[i].coordinate[1]; // Upper left and lower right vertices
 		box_output[i * 6 + 4] = results[0].det_result->boxes[i].coordinate[2];
 		box_output[i * 6 + 5] = results[0].det_result->boxes[i].coordinate[3];
-		
-		//Mask预测结果
+
+		// Mask prediction results
 		for (int j = 0; j < results[0].det_result->boxes[i].mask.data.size(); j++)
 		{
 			if (mask_output[j] == 0)
@@ -296,10 +301,12 @@ extern "C" __declspec(dllexport) void Mask_ModelPredict(const unsigned char* img
 
 
 /*
-* 模型销毁/注销接口
+* Model destruction API
+*
+* extern "C"
 */
-extern "C" __declspec(dllexport) void DestructModel()
+extern "C" void DestructModel()
 {
 	delete model;
 	std::cout << "destruct model success" << std::endl;
-}
+}
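
A minimal caller sketch for the exported C interface above. The model/image paths, the 100-box cap, and the label-buffer size are placeholder assumptions, the box encoding (category_id, score, then x1, y1, w, h) follows the comments in `Det_ModelPredict`, and the functions are assumed to be linked directly against the built `model_infer` library with OpenCV available:

```cpp
#include <iostream>
#include <vector>

#include <opencv2/opencv.hpp>

// Exported entry points from model_infer.cpp (signatures as in this commit).
extern "C" void InitModel(const char* model_type, const char* model_filename,
                          const char* params_filename, const char* cfg_file,
                          bool use_gpu, int gpu_id, char* paddlex_model_type);
extern "C" void Det_ModelPredict(const unsigned char* img, int nWidth, int nHeight,
                                 int nChannel, float* output, int* nBoxesNum,
                                 char* LabelList);
extern "C" void DestructModel();

int main() {
  // Placeholder paths: an exported PaddleX (dygraph) inference model.
  char paddlex_model_type[10] = {0};
  InitModel("paddlex", "./inference_model/model.pdmodel",
            "./inference_model/model.pdiparams", "./inference_model/model.yml",
            /*use_gpu=*/true, /*gpu_id=*/0, paddlex_model_type);  // fills "det"/"seg"/"clas"

  cv::Mat img = cv::imread("./test.jpg");  // 3-channel BGR image

  // Caller-allocated outputs: 6 floats per box; 100 boxes and 2048 label bytes
  // are assumed upper bounds for this sketch.
  std::vector<float> boxes(100 * 6, 0.f);
  int num_boxes = 0;
  char label_list[2048] = {0};

  Det_ModelPredict(img.data, img.cols, img.rows, img.channels(),
                   boxes.data(), &num_boxes, label_list);

  for (int i = 0; i < num_boxes; ++i) {
    // Layout per the comments above: category_id, score, x1, y1, w, h
    std::cout << "category_id=" << boxes[i * 6 + 0]
              << " score=" << boxes[i * 6 + 1]
              << " box=(" << boxes[i * 6 + 2] << ", " << boxes[i * 6 + 3] << ", "
              << boxes[i * 6 + 4] << ", " << boxes[i * 6 + 5] << ")\n";
  }

  DestructModel();  // releases the global model created by InitModel
  return 0;
}
```

The segmentation entry point follows the same pattern; a sketch assuming one `uint8` class id per input pixel (verify against the actual `seg_result->label_map` size before relying on it):

```cpp
#include <vector>

#include <opencv2/opencv.hpp>

extern "C" void InitModel(const char* model_type, const char* model_filename,
                          const char* params_filename, const char* cfg_file,
                          bool use_gpu, int gpu_id, char* paddlex_model_type);
extern "C" void Seg_ModelPredict(const unsigned char* img, int nWidth, int nHeight,
                                 int nChannel, unsigned char* output);
extern "C" void DestructModel();

int main() {
  char unused[10] = {0};
  // Placeholder paths: an exported PaddleSeg model (deploy.yaml per file_format.md).
  InitModel("seg", "./seg_model/model.pdmodel", "./seg_model/model.pdiparams",
            "./seg_model/deploy.yaml", /*use_gpu=*/false, /*gpu_id=*/0, unused);

  cv::Mat img = cv::imread("./test.jpg");
  // Assumption: the buffer must hold the label map copied by memcpy inside
  // Seg_ModelPredict; here it is sized as width * height.
  std::vector<unsigned char> label_map(img.cols * img.rows, 0);

  Seg_ModelPredict(img.data, img.cols, img.rows, img.channels(), label_map.data());
  DestructModel();
  return 0;
}
```

`Cls_ModelPredict` and `Mask_ModelPredict` are driven the same way, again with caller-allocated output buffers.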

+ 1 - 1
deploy/cpp/docs/demo/decrypt_infer.md

@@ -1,4 +1,4 @@
-# 模型加密预测示例
+# 模型加密预测
 
 本文档说明如何对模型进行加密解密部署,仅供用户参考进行使用,开发者可基于此demo示例进行二次开发,满足集成的需求。
 

+ 33 - 0
deploy/cpp/docs/demo/model_infer.md

@@ -0,0 +1,33 @@
+# PaddleX Deployment部署方式
+
+PaddleX Deployment适配业界常用的CPU、GPU(包括NVIDIA Jetson)、树莓派等硬件,支持[PaddleClas](https://github.com/PaddlePaddle/PaddleClas)、[PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)、[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)三个套件的训练的部署,支持用户采用OpenVINO或TensorRT进行推理加速。完备支持工业最常使用的Windows系统,且提供C#语言进行部署的方式!
+## 模型套件支持
+本目录下代码,目前支持以下飞桨官方套件基于PaddleInference的部署。用户可参考[文件夹结构](./file_format.md)了解模型导出前后文件夹状态。
+
+| 套件名称 | 版本号   | 支持模型 |
+| -------- | -------- | ------- |
+| PaddleDetection  | [release/2.1](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1)、[release/0.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/0.5) |  FasterRCNN / MaskRCNN / PPYOLO / PPYOLOv2 / YOLOv3   |  
+| PaddleSeg        | [release/2.1](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1)       |  全部分割模型  |
+| PaddleClas       | [release/2.1](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1)      |  全部分类模型  |
+| PaddleX          | [release/2.0.0](https://github.com/PaddlePaddle/PaddleX)                        |  全部静态图、动态图模型   |
+
+## 硬件支持
+- CPU(linux/windows)
+- GPU(linux/windows)
+
+## 各套件部署方式说明
+
+- [PaddleX部署指南](./models/paddlex.md)
+- [PaddleDetection部署指南](./models/paddledetection.md)
+- [PaddleSeg部署指南](./models/paddleseg.md)
+- [PaddleClas部署指南](./models/paddleclas.md)
+
+## 模型加密与预测加速
+
+- [模型加密预测示例](./demo/decrypt_infer.md)
+- [PaddleInference集成TensorRT加载模型预测示例](./demo/tensorrt_infer.md)
+
+## <h2 id="1">C++代码预测说明</h2>
+
+- [部署相关API说明](./apis/model.md)
+- [模型配置文件说明](./apis/yaml.md)

+ 9 - 0
deploy/cpp/docs/file_format.md

@@ -0,0 +1,9 @@
+# 各套件模型导出前后文件夹状态
+
+| 套件名称 | 导出前文件 | 导出后文件 |
+| :-- | :-- | :-- |
+| PaddleX静态图 | model.pdparams<br>model.pdmodel<br>model.yml | __ model__<br>__ params__<br>model.yml |
+| PaddleX动态图 | model.pdparams<br>model.pdopt<br>model.yml | model.pdmodel<br>model.pdiparams<br>model.pdiparams.info<br>model.yml<br>pipeline.yml |
+| PaddleDetection | XXX.pdparams |  infer_cfg.yml<br>model.pdiparams<br>model.pdiparams.info<br>model.pdmodel |
+| PaddleSeg | XXX.pdparams | deploy.yaml<br>model.pdiparams<br>model.pdiparams.info<br>model.pdmodel |
+| PaddleClas | XXX.pdparams | model.pdiparams<br>model.pdiparams.info<br>model.pdmodel |

Binary
deploy/cpp/docs/images/cmakelist_set.png


Binary
deploy/cpp/docs/images/cpu_infer.png


Binary
deploy/cpp/docs/images/deploy_build_sh.png


Binary
deploy/cpp/docs/images/infer_demo_cmakelist.png


Binary
deploy/cpp/docs/images/paddleinference_filelist.png


Binary
deploy/cpp/docs/images/show_menu.png


Binary
deploy/cpp/docs/images/tensorrt.png


+ 3 - 3
deploy/cpp/docs/manufacture_sdk/README.md

@@ -1,6 +1,6 @@
 # PaddleClas模型部署
 
-当前支持PaddleClas release/2.1分支导出的模型进行部署。本文档以ResNet50模型为例,讲述从release-2.1分支导出模型并用PaddleX 进行cpp部署整个流程。 PaddleClas相关详细文档可以查看[官网文档](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/README_cn.md)
+当前支持PaddleClas release/2.1分支导出的模型进行部署。本文档以ResNet50模型为例,讲述从release/2.1分支导出模型并用PaddleX 进行cpp部署整个流程。 PaddleClas相关详细文档可以查看[官网文档](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/README_cn.md)
 
 
 
@@ -57,8 +57,8 @@ ResNet50
 参考编译文档
 
 - [Linux系统上编译指南](../compile/paddle/linux.md)
-- [Windows系统上编译指南](../compile/paddle/windows.md)
-
+- [Windows系统上编译指南(生成exe)](../compile/paddle/windows.md)
+- [Windows系统上编译指南(生成dll供C#调用)](../csharp_deploy/)
 
 
 ## 步骤三 模型预测

+ 2 - 1
deploy/cpp/docs/models/paddledetection.md

@@ -50,7 +50,8 @@ yolov3_darknet
 参考编译文档
 
 - [Linux系统上编译指南](../compile/paddle/linux.md)
-- [Windows系统上编译指南](../compile/paddle/windows.md)
+- [Windows系统上编译指南(生成exe)](../compile/paddle/windows.md)
+- [Windows系统上编译指南(生成dll供C#调用)](../csharp_deploy/)
 
 
 

+ 3 - 2
deploy/cpp/docs/models/paddleseg.md

@@ -1,6 +1,6 @@
 # PaddleSeg模型部署
 
-当前支持PaddleSeg release/2.1分支训练的模型进行导出及部署。本文档以[Deeplabv3P](https://github.com/PaddlePaddle/PaddleSeg/blob/release/v2.0/configs/deeplabv3p)模型为例,讲述从release-2.1版本导出模型并进行cpp部署整个流程。 PaddleSeg相关详细文档查看[官网文档](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/README_CN.md)
+当前支持PaddleSeg release/2.1分支训练的模型进行导出及部署。本文档以[Deeplabv3P](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/configs/deeplabv3p)模型为例,讲述从release/2.1版本导出模型并进行cpp部署整个流程。 PaddleSeg相关详细文档查看[官网文档](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/README_CN.md)
 
 ## 步骤一 部署模型导出
 
@@ -39,7 +39,8 @@ output
 参考编译文档
 
 - [Linux系统上编译指南](../compile/paddle/linux.md)
-- [Windows系统上编译指南](../compile/paddle/windows.md)
+- [Windows系统上编译指南(生成exe)](../compile/paddle/windows.md)
+- [Windows系统上编译指南(生成dll供C#调用)](../csharp_deploy)
 
 ## 步骤三 模型预测
 

+ 4 - 5
deploy/cpp/docs/models/paddlex.md

@@ -5,7 +5,7 @@
 
 ## 步骤一 部署模型导出
 
-请参考[PaddlX模型导出文档](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md)
+请参考[PaddleX模型导出文档](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/export_model.md)
 
 
 ## 步骤二 编译
@@ -13,8 +13,8 @@
 参考编译文档
 
 - [Linux系统上编译指南](../compile/paddle/linux.md)
-- [Windows系统上编译指南](../compile/paddle/windows.md)
-
+- [Windows系统上编译指南(生成exe)](../compile/paddle/windows.md)
+- [Windows系统上编译指南(生成dll供C#调用)](../csharp_deploy)
 
 ## 步骤三 模型预测
 
@@ -54,5 +54,4 @@ Classify(809    sunscreen   0.939211)
 
 - [单卡加载模型预测示例](../demo/model_infer.md)
 - [多卡加载模型预测示例](../demo/multi_gpu_model_infer.md)
-- [PaddleInference集成TensorRT加载模型预测示例](../../demo/tensorrt_infer.md)
-- [模型加密预测示例](./docs/demo/decrypt_infer.md)
+- [PaddleInference集成TensorRT加载模型预测示例](../demo/tensorrt_infer.md)

+ 39 - 0
deploy/cpp/scripts/jetson_build.sh

@@ -0,0 +1,39 @@
+# 是否使用GPU(即是否使用 CUDA)
+WITH_GPU=ON
+# 使用MKL or openblas
+WITH_MKL=OFF
+# 是否集成 TensorRT(仅WITH_GPU=ON 有效)
+WITH_PADDLE_TENSORRT=OFF
+# TensorRT 的路径,如果需要集成TensorRT,需修改为您实际安装的TensorRT路径
+TENSORRT_DIR=$(pwd)/TensorRT/
+# Paddle 预测库路径, 请修改为您实际安装的预测库路径
+PADDLE_DIR=$(pwd)/paddle_inference
+# Paddle 的预测库是否使用静态库来编译
+# 使用TensorRT时,Paddle的预测库通常为动态库
+WITH_STATIC_LIB=OFF
+# CUDA 的 lib 路径
+CUDA_LIB=/usr/local/cuda/lib64
+# CUDNN 的 lib 路径
+CUDNN_LIB=/usr/lib/aarch64-linux-gnu
+# 是否加密
+WITH_ENCRYPTION=OFF
+# OPENSSL 路径
+OPENSSL_DIR=$(pwd)/deps/openssl-1.1.0k
+
+
+# 以下无需改动
+rm -rf build
+mkdir -p build
+cd build
+sudo cmake .. \
+    -DWITH_GPU=${WITH_GPU} \
+    -DWITH_MKL=${WITH_MKL} \
+    -DWITH_PADDLE_TENSORRT=${WITH_PADDLE_TENSORRT} \
+    -DTENSORRT_DIR=${TENSORRT_DIR} \
+    -DPADDLE_DIR=${PADDLE_DIR} \
+    -DWITH_STATIC_LIB=${WITH_STATIC_LIB} \
+    -DCUDA_LIB=${CUDA_LIB} \
+    -DCUDNN_LIB=${CUDNN_LIB} \
+    -DWITH_ENCRYPTION=${WITH_ENCRYPTION} \
+    -DOPENSSL_DIR=${OPENSSL_DIR}
+make -j8

+ 6 - 0
deploy/python/README.md

@@ -0,0 +1,6 @@
+# Python部署
+
+- 本地部署
+  - [Python部署](../../docs/python_deploy.md)
+- 服务化部署
+  - [HubServing部署(Python)](../../docs/hub_serving_deploy.md)

+ 0 - 1026
deploy/resources/resnet50_imagenet.yml

@@ -1,1026 +0,0 @@
-model_format: Paddle
-toolkit: PaddleClas
-toolkit_version: unknown
-input_tensor_name: inputs
-transforms:
-  BGR2RGB: ~
-  ResizeByShort:
-    target_size: 256
-    interp: 1
-    use_scale: false
-  CenterCrop:
-    width: 224
-    height: 224
-  Convert:
-    dtype: float
-  Normalize:
-    mean:
-      - 0.485
-      - 0.456
-      - 0.406
-    std:
-      - 0.229
-      - 0.224
-      - 0.225
-  Permute: ~
-labels:
-  - kit_fox
-  - English_setter
-  - Siberian_husky
-  - Australian_terrier
-  - English_springer
-  - grey_whale
-  - lesser_panda
-  - Egyptian_cat
-  - ibex
-  - Persian_cat
-  - cougar
-  - gazelle
-  - porcupine
-  - sea_lion
-  - malamute
-  - badger
-  - Great_Dane
-  - Walker_hound
-  - Welsh_springer_spaniel
-  - whippet
-  - Scottish_deerhound
-  - killer_whale
-  - mink
-  - African_elephant
-  - Weimaraner
-  - soft-coated_wheaten_terrier
-  - Dandie_Dinmont
-  - red_wolf
-  - Old_English_sheepdog
-  - jaguar
-  - otterhound
-  - bloodhound
-  - Airedale
-  - hyena
-  - meerkat
-  - giant_schnauzer
-  - titi
-  - three-toed_sloth
-  - sorrel
-  - black-footed_ferret
-  - dalmatian
-  - black-and-tan_coonhound
-  - papillon
-  - skunk
-  - Staffordshire_bullterrier
-  - Mexican_hairless
-  - Bouvier_des_Flandres
-  - weasel
-  - miniature_poodle
-  - Cardigan
-  - malinois
-  - bighorn
-  - fox_squirrel
-  - colobus
-  - tiger_cat
-  - Lhasa
-  - impala
-  - coyote
-  - Yorkshire_terrier
-  - Newfoundland
-  - brown_bear
-  - red_fox
-  - Norwegian_elkhound
-  - Rottweiler
-  - hartebeest
-  - Saluki
-  - grey_fox
-  - schipperke
-  - Pekinese
-  - Brabancon_griffon
-  - West_Highland_white_terrier
-  - Sealyham_terrier
-  - guenon
-  - mongoose
-  - indri
-  - tiger
-  - Irish_wolfhound
-  - wild_boar
-  - EntleBucher
-  - zebra
-  - ram
-  - French_bulldog
-  - orangutan
-  - basenji
-  - leopard
-  - Bernese_mountain_dog
-  - Maltese_dog
-  - Norfolk_terrier
-  - toy_terrier
-  - vizsla
-  - cairn
-  - squirrel_monkey
-  - groenendael
-  - clumber
-  - Siamese_cat
-  - chimpanzee
-  - komondor
-  - Afghan_hound
-  - Japanese_spaniel
-  - proboscis_monkey
-  - guinea_pig
-  - white_wolf
-  - ice_bear
-  - gorilla
-  - borzoi
-  - toy_poodle
-  - Kerry_blue_terrier
-  - ox
-  - Scotch_terrier
-  - Tibetan_mastiff
-  - spider_monkey
-  - Doberman
-  - Boston_bull
-  - Greater_Swiss_Mountain_dog
-  - Appenzeller
-  - Shih-Tzu
-  - Irish_water_spaniel
-  - Pomeranian
-  - Bedlington_terrier
-  - warthog
-  - Arabian_camel
-  - siamang
-  - miniature_schnauzer
-  - collie
-  - golden_retriever
-  - Irish_terrier
-  - affenpinscher
-  - Border_collie
-  - hare
-  - boxer
-  - silky_terrier
-  - beagle
-  - Leonberg
-  - German_short-haired_pointer
-  - patas
-  - dhole
-  - baboon
-  - macaque
-  - Chesapeake_Bay_retriever
-  - bull_mastiff
-  - kuvasz
-  - capuchin
-  - pug
-  - curly-coated_retriever
-  - Norwich_terrier
-  - flat-coated_retriever
-  - hog
-  - keeshond
-  - Eskimo_dog
-  - Brittany_spaniel
-  - standard_poodle
-  - Lakeland_terrier
-  - snow_leopard
-  - Gordon_setter
-  - dingo
-  - standard_schnauzer
-  - hamster
-  - Tibetan_terrier
-  - Arctic_fox
-  - wire-haired_fox_terrier
-  - basset
-  - water_buffalo
-  - American_black_bear
-  - Angora
-  - bison
-  - howler_monkey
-  - hippopotamus
-  - chow
-  - giant_panda
-  - American_Staffordshire_terrier
-  - Shetland_sheepdog
-  - Great_Pyrenees
-  - Chihuahua
-  - tabby
-  - marmoset
-  - Labrador_retriever
-  - Saint_Bernard
-  - armadillo
-  - Samoyed
-  - bluetick
-  - redbone
-  - polecat
-  - marmot
-  - kelpie
-  - gibbon
-  - llama
-  - miniature_pinscher
-  - wood_rabbit
-  - Italian_greyhound
-  - lion
-  - cocker_spaniel
-  - Irish_setter
-  - dugong
-  - Indian_elephant
-  - beaver
-  - Sussex_spaniel
-  - Pembroke
-  - Blenheim_spaniel
-  - Madagascar_cat
-  - Rhodesian_ridgeback
-  - lynx
-  - African_hunting_dog
-  - langur
-  - Ibizan_hound
-  - timber_wolf
-  - cheetah
-  - English_foxhound
-  - briard
-  - sloth_bear
-  - Border_terrier
-  - German_shepherd
-  - otter
-  - koala
-  - tusker
-  - echidna
-  - wallaby
-  - platypus
-  - wombat
-  - revolver
-  - umbrella
-  - schooner
-  - soccer_ball
-  - accordion
-  - ant
-  - starfish
-  - chambered_nautilus
-  - grand_piano
-  - laptop
-  - strawberry
-  - airliner
-  - warplane
-  - airship
-  - balloon
-  - space_shuttle
-  - fireboat
-  - gondola
-  - speedboat
-  - lifeboat
-  - canoe
-  - yawl
-  - catamaran
-  - trimaran
-  - container_ship
-  - liner
-  - pirate
-  - aircraft_carrier
-  - submarine
-  - wreck
-  - half_track
-  - tank
-  - missile
-  - bobsled
-  - dogsled
-  - bicycle-built-for-two
-  - mountain_bike
-  - freight_car
-  - passenger_car
-  - barrow
-  - shopping_cart
-  - motor_scooter
-  - forklift
-  - electric_locomotive
-  - steam_locomotive
-  - amphibian
-  - ambulance
-  - beach_wagon
-  - cab
-  - convertible
-  - jeep
-  - limousine
-  - minivan
-  - Model_T
-  - racer
-  - sports_car
-  - go-kart
-  - golfcart
-  - moped
-  - snowplow
-  - fire_engine
-  - garbage_truck
-  - pickup
-  - tow_truck
-  - trailer_truck
-  - moving_van
-  - police_van
-  - recreational_vehicle
-  - streetcar
-  - snowmobile
-  - tractor
-  - mobile_home
-  - tricycle
-  - unicycle
-  - horse_cart
-  - jinrikisha
-  - oxcart
-  - bassinet
-  - cradle
-  - crib
-  - four-poster
-  - bookcase
-  - china_cabinet
-  - medicine_chest
-  - chiffonier
-  - table_lamp
-  - file
-  - park_bench
-  - barber_chair
-  - throne
-  - folding_chair
-  - rocking_chair
-  - studio_couch
-  - toilet_seat
-  - desk
-  - pool_table
-  - dining_table
-  - entertainment_center
-  - wardrobe
-  - Granny_Smith
-  - orange
-  - lemon
-  - fig
-  - pineapple
-  - banana
-  - jackfruit
-  - custard_apple
-  - pomegranate
-  - acorn
-  - hip
-  - ear
-  - rapeseed
-  - corn
-  - buckeye
-  - organ
-  - upright
-  - chime
-  - drum
-  - gong
-  - maraca
-  - marimba
-  - steel_drum
-  - banjo
-  - cello
-  - violin
-  - harp
-  - acoustic_guitar
-  - electric_guitar
-  - cornet
-  - French_horn
-  - trombone
-  - harmonica
-  - ocarina
-  - panpipe
-  - bassoon
-  - oboe
-  - sax
-  - flute
-  - daisy
-  - yellow_lady's_slipper
-  - cliff
-  - valley
-  - alp
-  - volcano
-  - promontory
-  - sandbar
-  - coral_reef
-  - lakeside
-  - seashore
-  - geyser
-  - hatchet
-  - cleaver
-  - letter_opener
-  - plane
-  - power_drill
-  - lawn_mower
-  - hammer
-  - corkscrew
-  - can_opener
-  - plunger
-  - screwdriver
-  - shovel
-  - plow
-  - chain_saw
-  - cock
-  - hen
-  - ostrich
-  - brambling
-  - goldfinch
-  - house_finch
-  - junco
-  - indigo_bunting
-  - robin
-  - bulbul
-  - jay
-  - magpie
-  - chickadee
-  - water_ouzel
-  - kite
-  - bald_eagle
-  - vulture
-  - great_grey_owl
-  - black_grouse
-  - ptarmigan
-  - ruffed_grouse
-  - prairie_chicken
-  - peacock
-  - quail
-  - partridge
-  - African_grey
-  - macaw
-  - sulphur-crested_cockatoo
-  - lorikeet
-  - coucal
-  - bee_eater
-  - hornbill
-  - hummingbird
-  - jacamar
-  - toucan
-  - drake
-  - red-breasted_merganser
-  - goose
-  - black_swan
-  - white_stork
-  - black_stork
-  - spoonbill
-  - flamingo
-  - American_egret
-  - little_blue_heron
-  - bittern
-  - crane
-  - limpkin
-  - American_coot
-  - bustard
-  - ruddy_turnstone
-  - red-backed_sandpiper
-  - redshank
-  - dowitcher
-  - oystercatcher
-  - European_gallinule
-  - pelican
-  - king_penguin
-  - albatross
-  - great_white_shark
-  - tiger_shark
-  - hammerhead
-  - electric_ray
-  - stingray
-  - barracouta
-  - coho
-  - tench
-  - goldfish
-  - eel
-  - rock_beauty
-  - anemone_fish
-  - lionfish
-  - puffer
-  - sturgeon
-  - gar
-  - loggerhead
-  - leatherback_turtle
-  - mud_turtle
-  - terrapin
-  - box_turtle
-  - banded_gecko
-  - common_iguana
-  - American_chameleon
-  - whiptail
-  - agama
-  - frilled_lizard
-  - alligator_lizard
-  - Gila_monster
-  - green_lizard
-  - African_chameleon
-  - Komodo_dragon
-  - triceratops
-  - African_crocodile
-  - American_alligator
-  - thunder_snake
-  - ringneck_snake
-  - hognose_snake
-  - green_snake
-  - king_snake
-  - garter_snake
-  - water_snake
-  - vine_snake
-  - night_snake
-  - boa_constrictor
-  - rock_python
-  - Indian_cobra
-  - green_mamba
-  - sea_snake
-  - horned_viper
-  - diamondback
-  - sidewinder
-  - European_fire_salamander
-  - common_newt
-  - eft
-  - spotted_salamander
-  - axolotl
-  - bullfrog
-  - tree_frog
-  - tailed_frog
-  - whistle
-  - wing
-  - paintbrush
-  - hand_blower
-  - oxygen_mask
-  - snorkel
-  - loudspeaker
-  - microphone
-  - screen
-  - mouse
-  - electric_fan
-  - oil_filter
-  - strainer
-  - space_heater
-  - stove
-  - guillotine
-  - barometer
-  - rule
-  - odometer
-  - scale
-  - analog_clock
-  - digital_clock
-  - wall_clock
-  - hourglass
-  - sundial
-  - parking_meter
-  - stopwatch
-  - digital_watch
-  - stethoscope
-  - syringe
-  - magnetic_compass
-  - binoculars
-  - projector
-  - sunglasses
-  - loupe
-  - radio_telescope
-  - bow
-  - cannon
-  - assault_rifle
-  - rifle
-  - projectile
-  - computer_keyboard
-  - typewriter_keyboard
-  - crane
-  - lighter
-  - abacus
-  - cash_machine
-  - slide_rule
-  - desktop_computer
-  - hand-held_computer
-  - notebook
-  - web_site
-  - harvester
-  - thresher
-  - printer
-  - slot
-  - vending_machine
-  - sewing_machine
-  - joystick
-  - switch
-  - hook
-  - car_wheel
-  - paddlewheel
-  - pinwheel
-  - potter's_wheel
-  - gas_pump
-  - carousel
-  - swing
-  - reel
-  - radiator
-  - puck
-  - hard_disc
-  - sunglass
-  - pick
-  - car_mirror
-  - solar_dish
-  - remote_control
-  - disk_brake
-  - buckle
-  - hair_slide
-  - knot
-  - combination_lock
-  - padlock
-  - nail
-  - safety_pin
-  - screw
-  - muzzle
-  - seat_belt
-  - ski
-  - candle
-  - jack-o'-lantern
-  - spotlight
-  - torch
-  - neck_brace
-  - pier
-  - tripod
-  - maypole
-  - mousetrap
-  - spider_web
-  - trilobite
-  - harvestman
-  - scorpion
-  - black_and_gold_garden_spider
-  - barn_spider
-  - garden_spider
-  - black_widow
-  - tarantula
-  - wolf_spider
-  - tick
-  - centipede
-  - isopod
-  - Dungeness_crab
-  - rock_crab
-  - fiddler_crab
-  - king_crab
-  - American_lobster
-  - spiny_lobster
-  - crayfish
-  - hermit_crab
-  - tiger_beetle
-  - ladybug
-  - ground_beetle
-  - long-horned_beetle
-  - leaf_beetle
-  - dung_beetle
-  - rhinoceros_beetle
-  - weevil
-  - fly
-  - bee
-  - grasshopper
-  - cricket
-  - walking_stick
-  - cockroach
-  - mantis
-  - cicada
-  - leafhopper
-  - lacewing
-  - dragonfly
-  - damselfly
-  - admiral
-  - ringlet
-  - monarch
-  - cabbage_butterfly
-  - sulphur_butterfly
-  - lycaenid
-  - jellyfish
-  - sea_anemone
-  - brain_coral
-  - flatworm
-  - nematode
-  - conch
-  - snail
-  - slug
-  - sea_slug
-  - chiton
-  - sea_urchin
-  - sea_cucumber
-  - iron
-  - espresso_maker
-  - microwave
-  - Dutch_oven
-  - rotisserie
-  - toaster
-  - waffle_iron
-  - vacuum
-  - dishwasher
-  - refrigerator
-  - washer
-  - Crock_Pot
-  - frying_pan
-  - wok
-  - caldron
-  - coffeepot
-  - teapot
-  - spatula
-  - altar
-  - triumphal_arch
-  - patio
-  - steel_arch_bridge
-  - suspension_bridge
-  - viaduct
-  - barn
-  - greenhouse
-  - palace
-  - monastery
-  - library
-  - apiary
-  - boathouse
-  - church
-  - mosque
-  - stupa
-  - planetarium
-  - restaurant
-  - cinema
-  - home_theater
-  - lumbermill
-  - coil
-  - obelisk
-  - totem_pole
-  - castle
-  - prison
-  - grocery_store
-  - bakery
-  - barbershop
-  - bookshop
-  - butcher_shop
-  - confectionery
-  - shoe_shop
-  - tobacco_shop
-  - toyshop
-  - fountain
-  - cliff_dwelling
-  - yurt
-  - dock
-  - brass
-  - megalith
-  - bannister
-  - breakwater
-  - dam
-  - chainlink_fence
-  - picket_fence
-  - worm_fence
-  - stone_wall
-  - grille
-  - sliding_door
-  - turnstile
-  - mountain_tent
-  - scoreboard
-  - honeycomb
-  - plate_rack
-  - pedestal
-  - beacon
-  - mashed_potato
-  - bell_pepper
-  - head_cabbage
-  - broccoli
-  - cauliflower
-  - zucchini
-  - spaghetti_squash
-  - acorn_squash
-  - butternut_squash
-  - cucumber
-  - artichoke
-  - cardoon
-  - mushroom
-  - shower_curtain
-  - jean
-  - carton
-  - handkerchief
-  - sandal
-  - ashcan
-  - safe
-  - plate
-  - necklace
-  - croquet_ball
-  - fur_coat
-  - thimble
-  - pajama
-  - running_shoe
-  - cocktail_shaker
-  - chest
-  - manhole_cover
-  - modem
-  - tub
-  - tray
-  - balance_beam
-  - bagel
-  - prayer_rug
-  - kimono
-  - hot_pot
-  - whiskey_jug
-  - knee_pad
-  - book_jacket
-  - spindle
-  - ski_mask
-  - beer_bottle
-  - crash_helmet
-  - bottlecap
-  - tile_roof
-  - mask
-  - maillot
-  - Petri_dish
-  - football_helmet
-  - bathing_cap
-  - teddy
-  - holster
-  - pop_bottle
-  - photocopier
-  - vestment
-  - crossword_puzzle
-  - golf_ball
-  - trifle
-  - suit
-  - water_tower
-  - feather_boa
-  - cloak
-  - red_wine
-  - drumstick
-  - shield
-  - Christmas_stocking
-  - hoopskirt
-  - menu
-  - stage
-  - bonnet
-  - meat_loaf
-  - baseball
-  - face_powder
-  - scabbard
-  - sunscreen
-  - beer_glass
-  - hen-of-the-woods
-  - guacamole
-  - lampshade
-  - wool
-  - hay
-  - bow_tie
-  - mailbag
-  - water_jug
-  - bucket
-  - dishrag
-  - soup_bowl
-  - eggnog
-  - mortar
-  - trench_coat
-  - paddle
-  - chain
-  - swab
-  - mixing_bowl
-  - potpie
-  - wine_bottle
-  - shoji
-  - bulletproof_vest
-  - drilling_platform
-  - binder
-  - cardigan
-  - sweatshirt
-  - pot
-  - birdhouse
-  - hamper
-  - ping-pong_ball
-  - pencil_box
-  - pay-phone
-  - consomme
-  - apron
-  - punching_bag
-  - backpack
-  - groom
-  - bearskin
-  - pencil_sharpener
-  - broom
-  - mosquito_net
-  - abaya
-  - mortarboard
-  - poncho
-  - crutch
-  - Polaroid_camera
-  - space_bar
-  - cup
-  - racket
-  - traffic_light
-  - quill
-  - radio
-  - dough
-  - cuirass
-  - military_uniform
-  - lipstick
-  - shower_cap
-  - monitor
-  - oscilloscope
-  - mitten
-  - brassiere
-  - French_loaf
-  - vase
-  - milk_can
-  - rugby_ball
-  - paper_towel
-  - earthstar
-  - envelope
-  - miniskirt
-  - cowboy_hat
-  - trolleybus
-  - perfume
-  - bathtub
-  - hotdog
-  - coral_fungus
-  - bullet_train
-  - pillow
-  - toilet_tissue
-  - cassette
-  - carpenter's_kit
-  - ladle
-  - stinkhorn
-  - lotion
-  - hair_spray
-  - academic_gown
-  - dome
-  - crate
-  - wig
-  - burrito
-  - pill_bottle
-  - chain_mail
-  - theater_curtain
-  - window_shade
-  - barrel
-  - washbasin
-  - ballpoint
-  - basketball
-  - bath_towel
-  - cowboy_boot
-  - gown
-  - window_screen
-  - agaric
-  - cellular_telephone
-  - nipple
-  - barbell
-  - mailbox
-  - lab_coat
-  - fire_screen
-  - minibus
-  - packet
-  - maze
-  - pole
-  - horizontal_bar
-  - sombrero
-  - pickelhaube
-  - rain_barrel
-  - wallet
-  - cassette_player
-  - comic_book
-  - piggy_bank
-  - street_sign
-  - bell_cote
-  - fountain_pen
-  - Windsor_tie
-  - volleyball
-  - overskirt
-  - sarong
-  - purse
-  - bolo_tie
-  - bib
-  - parachute
-  - sleeping_bag
-  - television
-  - swimming_trunks
-  - measuring_cup
-  - espresso
-  - pizza
-  - breastplate
-  - shopping_basket
-  - wooden_spoon
-  - saltshaker
-  - chocolate_sauce
-  - ballplayer
-  - goblet
-  - gyromitra
-  - stretcher
-  - water_bottle
-  - dial_telephone
-  - soap_dispenser
-  - jersey
-  - school_bus
-  - jigsaw_puzzle
-  - plastic_bag
-  - reflex_camera
-  - diaper
-  - Band_Aid
-  - ice_lolly
-  - velvet
-  - tennis_ball
-  - gasmask
-  - doormat
-  - Loafer
-  - ice_cream
-  - pretzel
-  - quilt
-  - maillot
-  - tape_player
-  - clog
-  - iPod
-  - bolete
-  - scuba_diver
-  - pitcher
-  - matchstick
-  - bikini
-  - sock
-  - CD_player
-  - lens_cap
-  - thatch
-  - vault
-  - beaker
-  - bubble
-  - cheeseburger
-  - parallel_bars
-  - flagpole
-  - coffee_mug
-  - rubber_eraser
-  - stole
-  - carbonara
-  - dumbbell

+ 1 - 0
docs/CHANGELOG.md

@@ -26,6 +26,7 @@ PaddleX RESTful是基于PaddleX开发的RESTful API。对于开发者来说只
 <div align="center">
 <img src="../images/1.png"  width = "500" />              </div>
 
+<a name="快速使用"></a>
 ## *如何快速使用PaddleX_Restful API 快速搭建私有化训练云平台*
 
 在该示例中PaddleX_Restful运行在一台带GPU的linux服务器下,用户通过其他电脑连接该服务器进行远程的操作。

Binary
docs/Resful_API/images/1.png


Binary
docs/apis/images/test.jpg


+ 10 - 20
docs/apis/images/yolo_predict.jpg


+ 7 - 17
docs/data/annotation/README.md

@@ -1,6 +1,12 @@
 # 数据标注
 
-## 手机拍照图片旋转
+### 用户可根据任务种类查看标注文档
+- [图像分类数据标注](classification.md)
+- [目标检测数据标注](object_detection.md)
+- [实例分割数据标注](instance_segmentation.md)
+- [语义分割数据标注](semantic_segmentation.md)
+
+### 手机拍照图片旋转
 
 当您收集的样本图像来源于手机拍照时,请注意由于手机拍照信息内附带水平垂直方向信息,这可能会使得在标注和训练时出现问题,因此在拍完照后注意根据方向对照片进行处理,使用如下函数即可解决
 ```python
@@ -25,19 +31,3 @@ im = Image.open(img_file)
 rotate(im)
 im.save('new_1.jpeg')
 ```
-
-## 图像分类数据标注
-
-详见文档[图像分类数据标注](classification.md)
-
-## 目标检测数据标注
-
-详见文档[目标检测数据标注](object_detection.md)
-
-## 实例分割数据标注
-
-详见文档[实例分割数据标注](instance_segmentation.md)
-
-## 语义分割数据标注
-
-详见文档[语义分割数据标注](semantic_segmentation.md)
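The `rotate(im)` helper called in the snippet above is defined earlier in the annotation README, outside the lines shown here. A minimal sketch of such an EXIF-orientation-based rotation is given below; the tag handling and rotation mapping are assumptions for illustration, not the file's exact implementation:

```python
from PIL import Image

EXIF_ORIENTATION_TAG = 274  # standard EXIF "Orientation" tag id


def rotate(im):
    """Return the image rotated according to its EXIF orientation, if any."""
    orientation = im.getexif().get(EXIF_ORIENTATION_TAG)
    if orientation == 3:
        return im.rotate(180, expand=True)
    if orientation == 6:
        return im.rotate(270, expand=True)
    if orientation == 8:
        return im.rotate(90, expand=True)
    return im


im = Image.open('1.jpeg')
im = rotate(im)
im.save('new_1.jpeg')
```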

+ 1 - 1
docs/data/annotation/classification.md

@@ -4,7 +4,7 @@ LabelMe可用于标注目标检测、实例分割、语义分割数据集,是
 
 ## 1. 安装Anaconda
 
-推荐使用Anaconda安装python依赖,有经验的开发者可以跳过此步骤。安装Anaconda的方式可以参考[文档](../../../../docs/appendix/anaconda_install.md)。
+推荐使用Anaconda安装python依赖,有经验的开发者可以跳过此步骤。安装Anaconda的方式可以参考[文档](../../../docs/appendix/anaconda_install.md)。
 
 在安装Anaconda,并创建环境之后,再进行接下来的步骤
 

+ 1 - 1
docs/data/annotation/object_detection.md

@@ -1,4 +1,4 @@
-# 数据格式说明
+# 数据格式说明及数据集加载
 
 请根据具体任务查看相应的数据格式说明文档:
 

+ 3 - 3
docs/data/format/classification.md

@@ -15,7 +15,7 @@ paddlex --split_dataset --format ImageNet --dataset_dir MyDataset --val_value 0.
 划分好的数据集会额外生成`labels.txt`, `train_list.txt`, `val_list.txt`, `test_list.txt`四个文件,之后可直接进行训练。
 
 
-- [图像分类任务训练示例代码](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/mobilenetv2.py)
+- [图像分类任务训练示例代码](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/image_classification/shufflenetv2.py)
 
 ## 目标检测
 
@@ -27,7 +27,7 @@ paddlex --split_dataset --format VOC --dataset_dir D:\MyDataset --val_value 0.2
 执行上面命令行,会在`D:\MyDataset`下生成`labels.txt`, `train_list.txt`, `val_list.txt`和`test_list.txt`,分别存储类别信息,训练样本列表,验证样本列表,测试样本列表
 
 
-- [目标检测任务训练示例代码](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/yolov3_mobilenetv1.py)
+- [目标检测任务训练示例代码](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/object_detection/yolov3_darknet53.py)
 
 ## 实例分割
 
@@ -50,4 +50,4 @@ paddlex --split_dataset --format SEG --dataset_dir D:\MyDataset --val_value 0.2
 执行上面命令行,会在`D:\MyDataset`下生成`train_list.txt`, `val_list.txt`, `test_list.txt`,分别存储训练样本信息,验证样本信息,测试样本信息
 
 
-- [语义分割任务训练示例代码](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/deeplabv3p_xception65.py)
+- [语义分割任务训练示例代码](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/semantic_segmentation/deeplabv3p_resnet50_vd.py)

+ 20 - 0
docs/gui/README.md

@@ -0,0 +1,20 @@
+# PaddleX可视化客户端介绍
+
+PaddleX可视化客户端是基于PaddleX开发的可视化深度学习模型训练套件,目前支持训练视觉领域的图像分类、目标检测、实例分割和语义分割四大任务。开发者以点选、键入的方式即可快速体验深度学习模型开发的全流程,可以作为您提升深度学习模型开发效率的工具。
+
+PaddleX GUI 当前提供Windows,Mac,Ubuntu三种版本一键绿色安装的方式。请至飞桨官网:https://www.paddlepaddle.org.cn/paddle/paddleX 下载您需要的版本。
+
+## 功能
+PaddleX可视化客户端是PaddleX API的衍生品,它在集成API功能的基础上,额外提供了可视化分析、评估等附加功能,致力于为开发者带来极致顺畅的开发体验。其拥有以下独特的功能:
+- **全流程打通**:PaddleX GUI覆盖深度学习模型开发必经的 **数据处理** 、 **超参配置** 、 **模型训练及优化** 、 **模型发布** 全流程,无需开发一行代码,即可得到高性能深度学习推理模型。
+- **数据集智能分析**:详细的数据结构说明,并提供 **数据标签自动校验** 。支持 **可视化数据预览** 、 **数据分布图表展示** 、 **一键数据集切分** 等实用功能
+- **自动超参推荐**:集成飞桨团队长时间产业实践经验,根据用户选择的模型类别、骨架网络等,提供多种针对性优化的 **预训练模型** ,并 **提供推荐超参配置** ,可 **一键开启多种优化策略**
+- **可视化模型评估**:集成 **可视化分析工具:VisualDL** , 以线性图表的形式展示精度、学习率等关键参数在训练过程中的变化趋势。提供 **混淆矩阵** 等实用方法,帮助快速定位问题,加速调参。模型评估报告一键导出,方便项目复盘分析。
+- **模型裁剪及量化**:一键启动模型裁剪、量化,在不同阶段为开发者提供模型优化的策略,满足不同环境对模型性能的需求。
+- **预训练模型管理**:可对历史训练模型进行保存及管理,未进行裁剪的模型可以保存为预训练模型,在后续任务中使用。
+- **可视化模型测试**:客户端直接展示模型预测效果,无需上线即可进行效果评估
+- **模型多端部署**:点选式选择模型发布平台、格式,一键导出预测模型,并匹配完善的模型预测部署说明文档,贴心助力产业端到端项目落地
+
+## 安装使用
+- 请参考[快速安装](../quick_start_GUI.md#快速安装)下载安装PaddleX可视化客户端。
+- 请参考[快速开始](../quick_start_GUI.md)查看[视频教程](../quick_start_GUI.md#视频教程)和[文档教程](../quick_start_GUI.md#文档教程)。

+ 0 - 20
docs/gui/download.md

@@ -1,20 +0,0 @@
-# PaddleX可视化客户端
-
-感谢使用PaddleX可视化客户端,通过本客户端,您可以实现图像分类、目标检测、实例分割和语义分割四大视觉任务模型的训练,裁剪及量化,以及模型在移动端/服务端的发布。
-
-## 可视化客户端使用
-
-- [PaddleX可视化客户端使用](./introduce.md)
-- [PaddleX模型部署](https://github.com/PaddlePaddle/PaddleX#5-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
-
-## Python API使用
-除可视化客户端,PaddleX还提供了更灵活的API方式进行模型训练、裁剪、量化,以及一系列的模型效果分析接口,欢迎有需求的开发者使用。
-
-- [10分钟上手API训练模型](../quick_start.md)
-- [更多模型训练示例](../../tutorials/train/README.md)
-
-## 支持PaddleX
-
-当前GitHub Repo即为PaddleX的开源项目地址,**欢迎大家star(本页面右上角哦)支持PaddleX的开发者!**
-
-个人开发者或企业用户有相关需求及建议,欢迎在此Github上提ISSUE,或邮件至paddlex@baidu.com。

二进制
docs/gui/how_to_use.md


二进制
docs/gui/images/QR2.png


+ 0 - 40
docs/gui/introduce.md

@@ -1,40 +0,0 @@
-# 介绍
-
-PaddleX可视化客户端基于PaddleX开发的可视化深度学习模型训练套件,目前支持训练视觉领域的图像分类、目标检测、实例分割和语义分割四大任务,同时支持模型裁剪、模型量化两种方式压缩模型。开发者以点选、键入的方式快速体验深度学习模型开发的全流程。可以作为您提升深度学习模型开发效率的工具。
-
-PaddleX GUI 当前提供Windows,Mac,Ubuntu三种版本一键绿色安装的方式。请至飞桨官网:https://www.paddlepaddle.org.cn/paddle/paddleX 下载您需要的版本。
-
-## 功能
-PaddleX可视化客户端是PaddleX API的衍生品,它在集成API功能的基础上,额外提供了可视化分析、评估等附加功能,致力于为开发者带来极致顺畅的开发体验。其拥有以下独特的功能:
-
-### 全流程打通
-
-PaddleX GUI覆盖深度学习模型开发必经的 **数据处理** 、 **超参配置** 、 **模型训练及优化** 、 **模型发布** 全流程,无需开发一行代码,即可得到高性能深度学习推理模型。
-
-### 数据集智能分析
-
-详细的数据结构说明,并提供 **数据标签自动校验** 。支持 **可视化数据预览** 、 **数据分布图表展示** 、 **一键数据集切分** 等实用功能
-
-### 自动超参推荐
-
-集成飞桨团队长时间产业实践经验,根据用户选择的模型类别、骨架网络等,提供多种针对性优化的 **预训练模型** ,并 **提供推荐超参配置** ,可 **一键开启多种优化策略**
-
-### 可视化模型评估
-
-集成 **可视化分析工具:VisualDL** , 以线性图表的形式展示acc、lr等关键参数在训练过程中的变化趋势。提供 **混淆矩阵** 等实用方法,帮助快速定位问题,加速调参。模型评估报告一键导出,方便项目复盘分析。
-
-### 模型裁剪及量化
-
-一键启动模型裁剪、量化,在不同阶段为开发者提供模型优化的策略,满足不同环境对模型性能的需求。
-
-### 预训练模型管理
-
-可对历史训练模型进行保存及管理,未进行裁剪的模型可以保存为预训练模型,在后续任务中使用。
-
-### 可视化模型测试
-
-客户端直接展示模型预测效果,无需上线即可进行效果评估
-
-### 模型多端部署
-
-点选式选择模型发布平台、格式,一键导出预测模型,并匹配完善的模型预测部署说明文档,贴心助力产业端到端项目落地

+ 1 - 1
docs/gui/restful/data_struct.md

@@ -140,7 +140,7 @@ if __name__ == '__main__':
 ```
 使用的测试图片如下:
 
-![](./apis/images/test.jpeg)
+![](./apis/images/test.jpg)
 
 将代码中的`IMAGE_PATH1`改成想要进行预测的图片路径后,在命令行执行:
 ```commandline

+ 0 - 84
docs/install.md

@@ -1,84 +0,0 @@
-# 快速安装PaddleX
-
-## 目录
-
-* [1. PaddleX API开发模式安装](#1)
-* [2. PadldeX GUI开发模式安装](#2)
-* [3. PaddleX Restful开发模式安装](#3)
-
-
-**PaddleX提供三种开发模式,满足用户的不同需求:**
-
-## <h2 id="1">1. PaddleX API开发模式安装</h2>
-
-通过简洁易懂的Python API,在兼顾功能全面性、开发灵活性、集成方便性的基础上,给开发者最流畅的深度学习开发体验。<br>
-
-以下安装过程默认用户已安装好**paddlepaddle-gpu或paddlepaddle(版本大于或等于2.1.0)**,paddlepaddle安装方式参照[飞桨官网](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/develop/install/pip/windows-pip.html)
-
-
-### PaddleX 2.0.0安装
-
-#### * Linux / macOS 操作系统
-
-使用pip安装方式安装2.0.0版本:
-
-```commandline
-pip install paddlex==2.0.0 -i https://mirror.baidu.com/pypi/simple
-```
-
-因PaddleX依赖pycocotools包,如遇到pycocotools安装失败,可参照如下方式安装pycocotools:
-
-```commandline
-pip install cython  
-pip install pycocotools
-```
-
-**我们推荐大家先安装Anacaonda,而后在新建的conoda环境中使用上述pip安装方式**。Anaconda是一个开源的Python发行版本,其包含了conda、Python等180多个科学包及其依赖项。使用Anaconda可以通过创建多个独立的Python环境,避免用户的Python环境安装太多不同版本依赖导致冲突。参考[Anaconda安装PaddleX文档](./appendix/anaconda_install.md)
-
-#### * Windows 操作系统
-
-
-使用pip安装方式安装2.0.0版本:
-
-```commandline
-pip install paddlex==2.0.0 -i https://mirror.baidu.com/pypi/simple
-```
-
-因PaddleX依赖pycocotools包,Windows安装时可能会提示`Microsoft Visual C++ 14.0 is required`,从而导致安装出错,[点击下载VC build tools](https://go.microsoft.com/fwlink/?LinkId=691126)安装再执行如下pip命令
-> 注意:安装完后,需要重新打开新的终端命令窗口
-
-```commandline
-pip install cython
-pip install git+https://gitee.com/jiangjiajun/philferriere-cocoapi.git#subdirectory=PythonAPI
-```
-
-### PaddleX develop安装
-
-github代码会跟随开发进度不断更新,可以安装develop分支的代码使用最新的功能,安装方式如下:
-
-```commandline
-git clone https://github.com/PaddlePaddle/PaddleX.git
-cd PaddleX
-git checkout develop
-pip install -r requirements.txt
-python setup.py install
-```
-
-如遇到pycocotools安装失败,参考[PaddleX 2.0.0安装](./install.md#paddlex-200安装)中介绍的解决方法。
-
-## <h2 id="2">2. PadldeX GUI开发模式安装</h2>
-
-
-   无代码开发的可视化客户端,应用PaddleX API实现,使开发者快速进行产业项目验证,并为用户开发自有深度学习软件/应用提供参照。
-
-- 前往[PaddleX官网](https://www.paddlepaddle.org.cn/paddle/paddlex),申请下载PaddleX GUI一键绿色安装包。
-
-- 前往[PaddleX GUI使用教程](./gui/how_to_use.md)了解PaddleX GUI使用详情。
-
-- [PaddleX GUI安装环境说明](./gui/download.md)
-
-
-## <h2 id="3">3. PaddleX Restful开发模式安装</h2>
-
-使用基于RESTful API开发的GUI与Web Demo实现远程的深度学习全流程开发;同时开发者也可以基于RESTful API开发个性化的可视化界面
-- 前往[PaddleX RESTful API使用教程](./Resful_API/docs/readme.md)  

二进制
docs/paddlex.png


二进制
docs/parameters.md


+ 1 - 1
docs/python_deploy.md

@@ -32,7 +32,7 @@ result = predictor.predict(img_file='test.jpg',
 ```
 
 * **预测结果可视化**
- 
+
 Python部署所得预测结果支持使用`paddlex.det.visualize`(适用于目标检测和实例分割模型)或`paddlex.seg.visualize`(适用于语义分割模型)进行可视化。
 ```python
 # 目标检测和实例分割结果
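For reference, a minimal end-to-end sketch of prediction plus visualization with the PaddleX Python API described in `docs/python_deploy.md` above; the model path and image name are placeholders:

```python
import paddlex as pdx

# Load a trained detection model and predict on a single image
model = pdx.load_model('output/yolov3_darknet53/best_model')
result = model.predict('test.jpg')

# Draw boxes with score >= 0.5 and save the visualization to ./output
pdx.det.visualize('test.jpg', result, threshold=0.5, save_dir='./output')
```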

+ 78 - 35
docs/quick_start.md → docs/quick_start_API.md

@@ -1,42 +1,95 @@
-# 10分钟快速上手使用
+# PaddleX API开发模式快速上手
+通过简洁易懂的Python API,在兼顾功能全面性、开发灵活性、集成方便性的基础上,给开发者最流畅的深度学习开发体验。
 
 ## 目录
-* [前置说明](#1)
-  * [PaddleX的模型训练](#11)
-  * [PaddleX的其他用法](#12)
-* [使用示例](#2)
-  * <a href=#安装PaddleX>安装PaddleX</a>
-  * <a href=#准备蔬菜分类数据集>准备蔬菜分类数据集</a>
-  * <a href=#定义训练验证图像处理流程transforms>定义训练/验证图像处理流程transforms</a>
-  * <a href=#定义dataset加载图像分类数据集>定义dataset加载图像分类数据集</a>
-  * <a href=#使用MoibleNetV3_small模型开始训练>使用MoibleNetV3_small模型开始训练</a>
-  * <a href=#训练过程使用VisualDL查看训练指标变化>训练过程使用VisualDL查看训练指标变化</a>
-  * <a href=加载训练保存的模型预测>加载训练保存的模型预测</a>
-* [更多使用教程](#3)
+- [快速安装](#快速安装)
+    - [PaddleX 2.0.0安装](#PaddleX-200安装)
+    - [PaddleX develop安装](#PaddleX-develop安装)
+- [使用前置说明](#使用前置说明)
+    - [PaddleX的模型训练](#PaddleX的模型训练)
+    - [PaddleX的其他用法](#PaddleX的其他用法)
+- [使用示例](#使用示例)
+    - <a href=#安装PaddleX>安装PaddleX</a>
+    - <a href=#准备蔬菜分类数据集>准备蔬菜分类数据集</a>
+    - <a href=#定义训练验证图像处理流程transforms>定义训练/验证图像处理流程transforms</a>
+    - <a href=#定义dataset加载图像分类数据集>定义dataset加载图像分类数据集</a>
+    - <a href=#使用MoibleNetV3_small模型开始训练>使用MobileNetV3_small模型开始训练</a>
+    - <a href=#训练过程使用VisualDL查看训练指标变化>训练过程使用VisualDL查看训练指标变化</a>
+    - <a href=#加载训练保存的模型预测>加载训练保存的模型预测</a>
+
+## 快速安装
+以下安装过程默认用户已安装好**paddlepaddle-gpu或paddlepaddle(版本大于或等于2.1.2)**,paddlepaddle安装方式参照[飞桨官网](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/release/2.0.0/install/pip/windows-pip.html)
+
+### PaddleX 2.0.0安装
+**我们推荐大家先安装Anaconda,而后在新建的conda环境中使用如下pip安装方式**。Anaconda是一个开源的Python发行版本,其包含了conda、Python等180多个科学包及其依赖项。使用Anaconda可以通过创建多个独立的Python环境,避免用户的Python环境安装太多不同版本依赖导致冲突。参考[Anaconda安装PaddleX文档](./appendix/anaconda_install.md)
+
+- Linux / macOS 操作系统
+
+使用pip安装方式安装2.0.0版本:
 
+```commandline
+pip install paddlex==2.0.0 -i https://mirror.baidu.com/pypi/simple
+```
+
+paddlepaddle已集成pycocotools包,但也有pycocotools无法随paddlepaddle成功安装的情况。因PaddleX依赖pycocotools包,如遇到pycocotools安装失败,可参照如下方式安装pycocotools:
+
+```commandline
+pip install cython  
+pip install pycocotools
+```
+
+- Windows 操作系统
+使用pip安装方式安装2.0.0版本:
+
+```commandline
+pip install paddlex==2.0.0 -i https://mirror.baidu.com/pypi/simple
+```
+
+因PaddleX依赖pycocotools包,Windows安装时可能会提示`Microsoft Visual C++ 14.0 is required`,从而导致安装出错,[点击下载VC build tools](https://go.microsoft.com/fwlink/?LinkId=691126)安装再执行如下pip命令
+> 注意:安装完后,需要重新打开新的终端命令窗口
+
+```commandline
+pip install cython
+pip install git+https://gitee.com/jiangjiajun/philferriere-cocoapi.git#subdirectory=PythonAPI
+```
+
+### PaddleX develop安装
+
+github代码会跟随开发进度不断更新,可以安装develop分支的代码使用最新的功能,安装方式如下:
+
+```commandline
+git clone https://github.com/PaddlePaddle/PaddleX.git
+cd PaddleX
+git checkout develop
+pip install -r requirements.txt
+python setup.py install
+```
+
+如遇到pycocotools安装失败,参考上文[PaddleX 2.0.0安装](#PaddleX-200安装)中介绍的解决方法。
 
-## <h2 id="1">前置说明</h2>
+## 使用前置说明
 
-### <h3 id="11">PaddleX的模型训练</h3>
+### PaddleX的模型训练
 
 跟随以下3个步骤,即可快速完成训练代码开发:
 
 | 步骤 |                  |说明             |
 | :--- | :--------------- | :-------------- |
-| 第1步| <a href=#定义训练验证图像处理流程transforms>定义transforms</a>  | 用于定义模型训练、验证、预测过程中,<br>输入图像的预处理和数据增强操作 |
-| 第2步| <a href="#定义dataset加载图像分类数据集">定义datasets</a>  | 用于定义模型要加载的训练、验证数据集 |
-| 第3步| <a href="#使用MoibleNetV3_small_ssld模型开始训练">定义模型开始训练</a> | 选择需要的模型,进行训练 |
+| 第1步| <a href="#准备蔬菜分类数据集">准备数据集</a>  | 用于训练网络 |
+| 第2步| <a href="#定义训练验证图像处理流程transforms">定义transforms</a>  | 用于定义模型训练、验证、预测过程中,<br>输入图像的预处理和数据增强操作 |
+| 第3步| <a href="#定义dataset加载图像分类数据集">定义datasets</a>  | 用于定义模型要加载的训练、验证数据集 |
+| 第4步| <a href="#使用MoibleNetV3_small模型开始训练">定义模型开始训练</a> | 选择需要的模型,进行训练 |
 
 > **注意**:不同模型的transforms、datasets和训练参数都有较大差异。可直接根据[模型训练教程](../tutorials/train)获取更多模型的训练代码。
 
-### <h3 id="12">PaddleX的其它用法</h3>
+### PaddleX的其它用法
 
 - <a href="#训练过程使用VisualDL查看训练指标变化">使用VisualDL查看训练过程中的指标变化</a>
 - <a href="#加载训练保存的模型预测">加载训练保存的模型进行预测</a>
 
-## <h2 id="2">使用示例</h2>
+## 使用示例
 
-接下来展示如何通过PaddleX在一个小数据集上进行训练。示例代码源于Github [tutorials/train/image_classification/mobilenetv3_small.py](../tutorials/train/image_classification/mobilenetv3_small.py),用户可自行下载至本地运行。  
+接下来展示如何通过PaddleX在一个小数据集上进行训练。示例代码源于Github [tutorials/train/image_classification/mobilenetv3_small.py](../tutorials/train/image_classification/mobilenetv3_small.py),用户可自行下载至本地运行。用户也可前往[AIStudio在线项目示例](https://aistudio.baidu.com/aistudio/projectdetail/2159977)学习体验。
 
 <a name="安装PaddleX"></a>
 **1. 安装PaddleX**  
@@ -89,9 +142,6 @@ eval_dataset = pdx.datasets.ImageNet(
     transforms=eval_transforms)
 ```
 
-- [paddlex.datasets.ImageNet接口说明](./apis/datasets.md#1)
-- [ImageNet数据格式说明](./data/format/classification.md)
-
 <a name="使用MoibleNetV3_small模型开始训练"></a>
 **5. 使用MobileNetV3_small模型开始训练**  
 
@@ -133,15 +183,8 @@ print("Predict Result: ", result)
 ```
 预测结果输出如下,
 ```
-Predict Result: Predict Result: [{'score': 0.9999393, 'category': 'bocai', 'category_id': 0}]
+Predict Result:  [{'category_id': 0, 'category': 'bocai', 'score': 0.99960476}]
 ```
-- [load_model接口说明](./apis/prediction.md)
-- [分类模型predict接口说明](./apis/models/classification.md#predict)
-
-
-<h2 id="3">更多使用教程</h2>
-
-- 1.[目标检测模型训练](../tutorials/train)
-- 2.[语义分割模型训练](../tutorials/train)
-- 3.[实例分割模型训练](../tutorials/train)
-- 4.[模型太大,想要更小的模型,试试模型裁剪吧!](../tutorials/slim/prune)
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/134845112-2330eab5-a2af-4e54-bda1-b5bb8e0c11be.png" width="400"  />
+</p>
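For readers skimming the renamed quick start from this change alone, the four steps it lists roughly correspond to the sketch below; the transforms, dataset paths and hyper-parameters are illustrative placeholders rather than values taken from the document:

```python
import paddlex as pdx
from paddlex import transforms as T

# Step 2: preprocessing / augmentation for training and evaluation (placeholder settings)
train_transforms = T.Compose([
    T.RandomCrop(crop_size=224), T.RandomHorizontalFlip(), T.Normalize()])
eval_transforms = T.Compose([
    T.ResizeByShort(short_size=256), T.CenterCrop(crop_size=224), T.Normalize()])

# Step 3: datasets in ImageNet (image classification) format
train_dataset = pdx.datasets.ImageNet(
    data_dir='vegetables_cls',
    file_list='vegetables_cls/train_list.txt',
    label_list='vegetables_cls/labels.txt',
    transforms=train_transforms,
    shuffle=True)
eval_dataset = pdx.datasets.ImageNet(
    data_dir='vegetables_cls',
    file_list='vegetables_cls/val_list.txt',
    label_list='vegetables_cls/labels.txt',
    transforms=eval_transforms)

# Step 4: pick a model and train (hyper-parameters are placeholders)
model = pdx.cls.MobileNetV3_small(num_classes=len(train_dataset.labels))
model.train(
    num_epochs=10,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    train_batch_size=32,
    learning_rate=0.01,
    save_dir='output/mobilenetv3_small',
    use_vdl=True)
```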

+ 158 - 0
docs/quick_start_GUI.md

@@ -0,0 +1,158 @@
+# PaddleX GUI开发模式快速上手
+感谢使用PaddleX可视化客户端,通过本客户端,您可以实现图像分类、目标检测、实例分割和语义分割四大视觉任务模型的训练,裁剪及量化,以及模型在移动端/服务端的发布。
+
+## 目录
+- [快速安装](#快速安装)
+  - [下载安装](#下载安装)
+  - [安装推荐环境](#安装推荐环境)
+- [视频教程](#视频教程)
+- [文档教程](#文档教程)
+  - [启动客户端](#1启动客户端)
+  - [准备和导入数据](#2准备和导入数据)
+  - [创建项目和任务](#3创建项目和任务)
+  - [任务模型训练](#4任务模型训练)
+  - [任务模型裁剪训练](#5任务模型裁剪训练)
+  - [模型效果评估](#6模型效果评估)
+  - [模型发布](#7模型发布)
+
+## 快速安装
+### 下载安装
+下载地址:https://www.paddlepaddle.org.cn/paddlex
+目前最新版本的GUI(Version 2.0.0)仅提供WIN和Linux版,暂未提供Mac版,若需在Mac上使用GUI,推荐安装Mac版历史版本Version 1.1.7
+- 特别说明:GUI 2.0要求CUDA >=11.0, cuDNN >= 8.0
+- WIN版下载后双击选择安装路径即可
+- Mac/Linux版下载后解压即可
+
+***注:安装/解压路径请务必在不包含中文和空格的路径下,否则可能会导致无法正确训练模型***
+
+### 安装推荐环境
+
+- **操作系统**:
+  * Windows 10
+  * Mac OS 10.13+
+  * Ubuntu 18.04(Ubuntu暂只支持18.04)
+
+***注:处理器需为x86_64架构,支持MKL。***
+
+- **训练硬件**:  
+  * **GPU**(仅Windows及Linux系统):  
+    推荐使用支持CUDA的NVIDIA显卡,例如:GTX 1070+以上性能的显卡
+    Windows系统X86_64驱动版本>=411.31
+    Linux系统X86_64驱动版本>=410.48
+    显存8G以上
+  * **CPU**:PaddleX当前支持您用本地CPU进行训练,但推荐使用GPU以获得更好的开发体验。
+  * **内存**:建议8G以上  
+  * **硬盘空间**:建议SSD剩余空间1T以上(非必须)  
+
+***注:PaddleX在Mac OS系统只支持CPU训练。Windows系统只支持单GPU卡训练。***
+
+## 视频教程
+用户可观看[图像分类](https://www.bilibili.com/video/BV1nK411F7J9?from=search&seid=3068181839691103009)、[目标检测](https://www.bilibili.com/video/BV1HB4y1A73b?from=search&seid=3068181839691103009)、[语义分割](https://www.bilibili.com/video/BV1qQ4y1Z7co?from=search&seid=3068181839691103009)、[实例分割](https://www.bilibili.com/video/BV1M44y1r7s6?from=search&seid=3068181839691103009)视频教程,并通过PaddleX可视化客户端完成四类任务。
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/134846471-4d5bcf96-216e-4419-a8b8-5d07fa05c884.png" width="800" />
+</p>
+
+## 文档教程
+### 1.启动客户端
+如果系统是Mac OS 10.15.5及以上,在双击客户端icon后,需要在Terminal中执行 ```sudo xattr -r -d com.apple.quarantine /Users/username/PaddleX``` ,并稍等几秒来启动客户端,其中 /Users/username/PaddleX 为您保存PaddleX的文件夹路径。
+
+其他系统,直接双击客户端icon即可启动客户端
+
+### 2.准备和导入数据
+- 准备数据
+  - 在开始模型训练前,用户需要根据不同的任务类型,将数据标注为相应的格式。目前PaddleX支持【图像分类】、【目标检测】、【语义分割】、【实例分割】四种任务类型。  
+  - 开发者可以参考PaddleX使用文档中的[数据标注](./data/annotation)来进行数据标注和转换工作。如若开发者自行准备数据,请注意数据格式是否与PaddleX支持的四种数据格式一致。
+
+- 导入数据集
+
+  ①数据标注完成后,需要根据不同的任务,将数据和标注文件,按照客户端提示更名并保存到正确的文件中。
+
+  ②在客户端新建数据集,选择与数据集匹配的任务类型,并选择数据集对应的路径,将数据集导入。
+
+  <p align="center">
+    <img src="https://user-images.githubusercontent.com/53808988/133880285-2e29646a-89e0-4f97-a675-4586d7469216.jpg" width="800" />
+  </p>
+
+  ③选定导入数据集后,客户端会自动校验数据及标注文件是否合规,校验成功后,您可根据实际需求,将数据集按比例划分为训练集、验证集、测试集。
+
+  ④您可在「数据分析」模块按规则预览您标注的数据集,双击单张图片可放大查看。
+
+  <p align="center">
+    <img src="https://user-images.githubusercontent.com/53808988/133880292-93d2f76b-1402-44bb-b84b-3a9ebc7c67c6.jpg" width="800" />
+  </p>
+
+### 3.创建项目和任务
+
+- 创建项目
+
+  ①在完成数据导入后,您可以点击「新建项目」创建一个项目。
+
+  ②您可根据实际任务需求选择项目的任务类型,需要注意项目所采用的数据集也带有任务类型属性,两者需要进行匹配。
+  <p align="center">
+    <img src="https://user-images.githubusercontent.com/53808988/133880340-1da23b7c-249d-4175-b98e-62fbff9a1f7b.jpg" width="800" />
+  </p>
+- 项目开发
+
+  ①数据选择:项目创建完成后,您需要选择已载入客户端并校验后的数据集,并点击下一步,进入参数配置页面。
+  <p align="center">
+    <img src="https://user-images.githubusercontent.com/53808988/133880374-157bc44a-6f64-45c5-bb3f-3608b8e85026.jpg" width="800" />
+  </p>
+
+  ②参数配置:主要分为**模型参数**、**训练参数**、**优化策略**三部分。您可根据实际需求选择模型结构、骨架网络及对应的训练参数、优化策略,使得任务效果最佳。
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/133880390-3b97b772-2f7d-47bc-af9f-5943bca45177.jpg" width="800" />
+</p>
+
+### 4.任务模型训练
+
+参数配置完成后,点击启动训练,模型开始训练并进行效果评估。
+
+- 训练可视化:在训练过程中,您可通过VisualDL查看模型训练过程参数变化、日志详情,及当前最优的训练集和验证集训练指标。训练过程中可随时点击"中止训练"来中止训练。
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/133880453-e3fb6399-8545-44d7-9086-61889aa07d89.jpg" width="800" />
+</p>
+
+- 模型训练结束后,可选择进入『模型剪裁分析』或者直接进入『模型评估』。
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/133880456-1c8bfa3b-757f-4927-b107-d3bb3bfcf529.jpg" width="800" />
+</p>
+
+> 模型训练是最容易出错的步骤,经常遇到的原因为 电脑无法联网下载预训练模型、显存不够。训练检测模型\实例分割模型对于显存要求较高,**建议用户通过在Windows/Mac/Ubuntu的命令行终端(Windows的cmd命令终端)执行`nvidia-smi`命令**查看显存情况,请不要使用系统自带的任务管理器查看。  
+
+### 5.任务模型裁剪训练
+
+此步骤可选,模型裁剪训练相对比普通的任务模型训练,需要消耗更多的时间,需要在正常任务模型训练的基础上,增加『**模型裁剪分析**』和『**模型裁剪训练**』两个步骤。  
+
+裁剪过程将对模型各卷积层的敏感度信息进行分析,根据各参数对模型效果的影响进行不同比例的裁剪,再进行精调训练获得最终裁剪后的模型。  
+裁剪训练后的模型体积,计算量都会减少,并且可以提升模型在低性能设备的预测速度,如移动端,边缘设备,CPU。
+
+在可视化客户端上,**用户训练好模型后**,在训练界面,
+- 首先,点击『模型裁剪分析』,此过程将会消耗较长的时间
+- 接着,点击『开始模型裁剪训练』,客户端会创建一个新的任务,无需修改参数,直接再启动训练即可
+
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/133880459-40cb4eeb-ce8e-40b3-8e75-7dda4544b116.jpg" width="800" />
+</p>
+
+### 6.模型效果评估
+
+在模型评估页面,您可查看训练后的模型效果。评估方法包括混淆矩阵、精度、召回率等。
+
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/133880511-5ff88ea7-69e2-4b88-bb27-13fc32268991.jpg" width="800" />
+</p>
+
+您还可以选择『数据集切分』时留出的『测试数据集』或从本地文件夹中导入一张/多张图片,将训练后的模型进行测试。根据测试结果,您可决定是否将训练完成的模型保存为预训练模型并进入模型发布页面,或返回先前步骤调整参数配置重新进行训练。
+
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/133880513-385923a4-4abf-41b2-a97f-06c757c36ccf.jpg" width="800" />
+</p>
+
+### 7.模型发布
+
+当模型效果满意后,您可根据实际的生产环境需求,选择将模型发布为需要的版本。  
+如若要部署到移动端/边缘设备,对于部分支持量化的模型,还可以根据需求选择是否量化。量化可以压缩模型体积,提升预测速度
+
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/53808988/133880541-680b0db0-5b30-4806-8c1a-0eb05a68c70b.jpg" width="800" />
+</p>

+ 1 - 0
docs/quick_start_Resful_API.md

@@ -0,0 +1 @@
+建设中,敬请期待

二进制
examples/C#_deploy/images/26.png


二进制
examples/C#_deploy/images/27.png


二进制
examples/C#_deploy/images/28.png


二进制
examples/C#_deploy/images/29.png


+ 0 - 2
examples/README.md

@@ -8,5 +8,3 @@
 * [缺陷检测](./defect_detection)
 
 * [工业表计检测](./meter_reader)
-
-* [Windows系统下使用C#语言部署](./C%23_deploy)

+ 1 - 1
examples/defect_detection/README.md

@@ -13,7 +13,7 @@
 <img src="./images/lens.png"  width = "1000" /
 >              </div>
 
-更多数据格式信息请参考[数据标注说明文档](https://paddlex.readthedocs.io/zh_CN/develop/data/annotation/index.html)
+更多数据格式信息请参考[数据标注说明文档](./../../docs/data/annotation/README.md)
 * **数据切分**
 将训练集、验证集和测试集按照7:2:1的比例划分。
 ``` shell

+ 201 - 0
examples/helmet_detection/LICENSE

@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

+ 203 - 0
examples/helmet_detection/README.md

@@ -0,0 +1,203 @@
+# 安全帽检测
+
+> 基于PaddleX API 2.0 开发
+
+## 1.项目说明
+
+在该项目中,主要向大家介绍如何使用目标检测来实现对安全帽的检测,所涉及的代码以及优化过程亦可用于其它目标检测任务。
+
+在施工现场,对于来往人员以及工作人员而言,安全问题至关重要,而安全帽更是保障在场人员安全的第一防线,因此需要对场地中的人员进行安全提醒。当人员未佩戴安全帽进入施工场所时,依靠人为监管耗时耗力、过程繁琐且实时性较差。针对上述问题,希望通过**视频监控->目标检测->智能督导**的方式智能、高效地完成此任务:
+
+<div align="center">
+<img src="./images/1.png"  width = "320" />
+<img src="./images/2.png"  width = "320" /></div>
+
+
+**业务难点:**
+
+- **精度要求高** 由于涉及安全问题,需要精度非常高才能保证对施工场所人员的安全督导。需要专门针对此目标的检测算法进行优化,另外,还需要处理拍摄角度、光线不完全受控,安全帽显示不全、可能存在遮挡等情况。
+- **小目标检测** 由于实际使用过程中,人员离镜头较远,因此需要模型对小目标的检测有较低的漏检率。
+
+<div align="center">
+<img src="./images/3.jpg"  width = "1024" /></div>
+
+
+
+
+## 2.数据准备
+
+数据集中包含了5000张已经标注好的数据。该项目采用目标检测的标注方式,在本文档中提供了VOC数据集格式。[点击此处前往下载数据集](https://aistudio.baidu.com/aistudio/datasetdetail/50329)
+
+**下载后的数据集文件夹需要更改一下命名:**
+
+```
+dataset/                          dataset/
+  ├── annotations/      -->         ├── Annotations/
+  ├── images/                       ├── JPEGImages/
+```
+
+数据集分类情况: **`head` , `helmet`, `person`.**
+
+更多数据格式信息请参考[数据标注说明文档](./../../docs/data/annotation/README.md)
+
+- **数据切分** 将训练集和验证集按照8.5:1.5的比例划分。 PaddleX中提供了简单易用的API,方便用户直接使用进行数据划分。
+
+```
+paddlex --split_dataset --format VOC --dataset_dir dataset --val_value 0.15
+```
+
+```
+dataset/                          dataset/
+  ├── Annotations/      -->         ├── Annotations/
+  ├── JPEGImages/                   ├── JPEGImages/
+                                    ├── labels.txt
+                                    ├── train_list.txt
+                                    ├── val_list.txt
+```
+
+
+
+## 3.模型选择
+
+PaddleX提供了丰富的视觉模型,在目标检测中提供了RCNN和YOLO系列模型。在本项目中采用YOLO作为检测模型进行安全帽检测。
+
+## 4. 模型训练
+
+
+
+在本项目中,采用YOLOV3作为安全帽检测的基线模型,以COCO指标作为评估指标。具体代码请参考[train.py](./code/train.py)
+
+运行如下代码开始训练模型:
+
+```
+python code/train.py
+```
+
+若执行如下命令,则可在log文件中查看训练日志,log文件保存在`code`目录下
+
+```
+python code/train.py > log
+```
+
+- 训练过程说明
+
+<div align="center">
+<img src="./images/4.png"  width = "1024" /></div>
+
+
+## 5.模型优化(进阶)
+
+- 精度提升 为了进一步提升模型的精度,可以通过**coco_error_analysis**对验证集上的预测错误进行分析并针对性优化,具体请参考[模型优化分析文档](./accuracy_improvement.md)
+
+
+
+采用PaddleX在Tesla V100上测试模型的推理时间(输入数据拷贝至GPU的时间、计算时间、数据拷贝至CPU的时间),推理时间如下表所示:(十次推理取平均耗时)
+
+| 模型                                                         | 推理时间 (ms/image) | map(Iou-0.5) | (coco)mmap | 安全帽AP(Iou-0.5) |
+| ------------------------------------------------------------ | :-------------------: | ------------ | :--------: | :---------------: |
+| baseline: YOLOv3 + DarkNet53 + cluster_yolo_anchor + img_size(480) |         50.34         | 61.6         |    39.2    |       94.58       |
+| YOLOv3 + ResNet50_vd_dcn + cluster_yolo_anchor+img_size(480) |         53.81         | 61.7         |    39.1    |       95.35       |
+| **PPYOLO + ResNet50_vd_dcn + iou_aware + img_size(480)**     |         72.88         | **62.4**     |    37.7    |     **95.73**     |
+| PPYOLO + ResNet50_vd_dcn + cluster_yolo_anchor + img_size(480) |         67.14         | 61.8         |    39.8    |       95.08       |
+| **PPYOLOV2 + ResNet50_vd_dcn + img_size(608)**               |         81.52         | 61.6         |  **41.3**  |       95.32       |
+| PPYOLOV2 + ResNet101_vd_dcn + img_size(608)                  |        106.62         | 61.3         |    40.6    |       95.15       |
+|                                                              |                       |              |            |                   |
+
+注意:
+
+- **608**的图像大小,一般使用默认的anchors进行训练和推理即可。
+- **cluster_yolo_anchor**: 用于生成拟合数据集的模型anchor
+
+```
+anchors = train_dataset.cluster_yolo_anchor(num_anchors=9, image_size=480)
+anchor_masks = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
+```
+
+
+
+**优化进展说明**:
+
+- 1.通过选择**更好的backbone**作为特征提取的骨干网络可以提高识别率、降低漏检率。<**DarkNet53 到 ResNet50_vd_dcn**>
+
+- 2.通过选择更好的检测架构可以提高检测的mmap值——即**Neck,Head部分的优化**可以提高ap。<**YOLOV3 到 PPYOLOV2**>
+
+- 3.缩放适当的图像大小可以提高模型的识别率,但是存在一定的阈值——当图像大小到某一个阈值时会导致精度下降。
+
+  - **一般图像大小选择(YOLO系列)**:320,480, 608。
+  - 一般**图像如果较大,物体也比较大**,可以较为放心的缩小图像大小再进行相关的训练和预测。
+  - 物体较小,不易缩小,**可以适当的裁剪划分原图或放大**,并处理对应的标注数据,再进行训练。
+
+  <**480到608**>
+
+- 4.通过cluster_yolo_anchor生成当前网络输入图像大小下拟合数据集的预置anchors,利用新生成的anchors替换原来的默认anchor,使得模型预测定位上框选**位置更准确**。
+
+- 5.通过PPYOLO两个实验,一个使用**iou_aware**,一个不是使用**iou_aware**而采用聚类得到的**anchor**提高定位能力;分析数据发现在定位信息优化上,**iou_aware**在当前数据集上表现更好,但推理时间也有所提升。
+
+- 通过以上的简单优化方式,获取了两个较好的模型结果:
+
+- | 模型                                           | 推理时间 (ms/image) | map(Iou-0.5) | (coco)mmap | 安全帽AP(Iou-0.5) |
+  | ---------------------------------------------- | :-------------------: | ------------ | :--------: | :---------------: |
+  | **PPYOLO + ResNet50_vd_dcn + img_size(480)**   |         72.88         | **62.4**     |    37.7    |     **95.73**     |
+  | **PPYOLOV2 + ResNet50_vd_dcn + img_size(608)** |         81.52         | 61.6         |  **41.3**  |       95.32       |
+
+## 6.模型预测
+
+
+
+运行如下代码:
+
+```
+python code/infer.py
+```
+
+则可生成result.txt文件并显示预测结果图片,result.txt文件中会显示图片中每个检测框的位置、类别及置信度, 从而实现了安全帽的自动检测。
+
+预测结果如下:
+
+<div align="center">
+<img src="images/5.png"  width = "400" />
+<img src="images/6.png"  width = "400" /></div>
+
+
+## 7.模型导出
+
+模型训练后保存在output文件夹,如果要使用PaddleInference进行部署需要导出成静态图的模型,运行如下命令,会自动在output文件夹下创建一个`inference_model`的文件夹,用来存放导出后的模型。
+
+```
+paddlex --export_inference --model_dir=output/yolov3_darknet53/best_model --save_dir=output/inference_model --fixed_input_shape=[480,480]
+```
+
+**注意**:设定 fixed_input_shape 的数值需与 eval_transforms 中设置的 target_size 数值上保持一致。
+
+## 8.模型上线选择
+
+本案例面向GPU端的最终方案是选择一阶段检测模型PPYOLOV2,其骨干网络选择加入了可变形卷积(DCN)的ResNet50_vd,训练阶段数据增强策略采用RandomHorizontalFlip、RandomDistort、RandomCrop等。
+
+在Tesla V100的Linux系统下,模型的推理时间大约为81.52ms/image,包括transform、输入数据拷贝至GPU的时间、计算时间、数据拷贝至CPU的时间。
+
+| 模型                                       | 推理时间 (ms/image) | map(Iou-0.5) | (coco)mmap | 安全帽AP(Iou-0.5) |
+| ------------------------------------------ | ------------------- | ------------ | :--------: | :---------------: |
+| PPYOLOV2 + ResNet50_vd_dcn + img_size(608) | 81.52               | 61.6         |    41.3    |       95.32       |
+|                                            |                     |              |            |                   |
+
+**上线模型的PR曲线:**
+
+<div align="center">
+<img src="images/8.png"  width = "1024" /></div>
+
+
+在本项目中的安全帽检测数据中,标注信息本身存在一定的缺漏,导致部分类别学习失效。但针对本项目的安全帽检测问题而言,**person(人)这一类别影响不大,因此可以在mmap较大的基础上主要看helmet(安全帽)的精度即可**。通过**COCO的评估指标**,可以使多类别的检测模型的评估更加符合实际应用;虽然我们可以看出在该数据集中,有一个类别对整体的map与mmap有较大影响,但是通过COCO指标能够取得一个相对数据集更**综合表现**(不同Iou尺度下)的一个模型。
+
+**注意**: 使用VOC评估指标或许能在IoU-0.5下取得更好的数值,但它缺少对多个IoU尺度的综合评估,在该指标下表现最好的模型未必在其它IoU尺度下也有最好的表现。
+
+## 9.模型部署方式
+
+
+
+模型部署采用了PaddleX提供的C++ inference部署方案,在该方案中提供了C#部署[Demo](../../deploy/cpp/docs/csharp_deploy),用户可根据实际情况自行参考。
+
+<div align="center">
+<img src="images/14.png"  width = "1024" /></div>
+
+
+
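As a complement to the export and deployment sections of this README, the sketch below shows one way the exported inference model might be run with the PaddleX Python deployment API described in `docs/python_deploy.md`; the paths and constructor arguments are placeholders to be checked against that document:

```python
import paddlex as pdx

# Load the exported inference model (output of `paddlex --export_inference`)
predictor = pdx.deploy.Predictor('output/inference_model', use_gpu=True)

# Predict on one image and visualize detections above a 0.5 score threshold
image = 'dataset/JPEGImages/hard_hat_workers1049.jpg'
result = predictor.predict(img_file=image)
pdx.det.visualize(image, result, threshold=0.5, save_dir='./output/inference_vis')
```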

+ 88 - 0
examples/helmet_detection/accuracy_improvement.md

@@ -0,0 +1,88 @@
+# 精度优化思路分析
+
+本小节侧重展示在模型迭代过程中优化精度的思路,在本案例中,有些优化策略获得了精度收益,而有些没有。在其他场景中,可根据实际情况尝试这些优化策略。
+
+## (1) 基线模型选择
+
+相较于二阶段检测模型,单阶段检测模型的精度略低但是速度更快。考虑到是部署到GPU端,本案例选择单阶段检测模型YOLOV3作为基线模型,其骨干网络选择DarkNet53。训练完成后,模型在验证集上的精度如下:
+
+| 模型                                                         | 推理时间 (ms/image) | map(Iou-0.5) | (coco)mmap | 安全帽AP(Iou-0.5) |
+| ------------------------------------------------------------ | :-------------------: | ------------ | :--------: | :---------------: |
+| baseline: YOLOv3 + DarkNet53 + cluster_yolo_anchor + img_size(480) |         50.34         | 61.6         |    39.2    |       94.58       |
+
+
+
+## (2) 基线模型效果分析与优化
+
+使用PaddleX提供的[paddlex.det.coco_error_analysis](https://paddlex.readthedocs.io/zh_CN/develop/apis/visualize.html#paddlex-det-coco-error-analysis)接口对模型在验证集上预测错误的原因进行分析,分析结果以图表的形式展示如下:
+
+| allclass                                    | head                                         | person                                       | helmet                                       |
+| ------------------------------------------- | -------------------------------------------- | -------------------------------------------- | -------------------------------------------- |
+| <img src="./images/9.png"  width = "320" /> | <img src="./images/10.png"  width = "320" /> | <img src="./images/12.png"  width = "320" /> | <img src="./images/11.png"  width = "320" /> |
+
+分析图表展示了7条Precision-Recall(PR)曲线,每一条曲线表示的Average Precision (AP)比它左边那条高,原因是逐步放宽了评估要求。以helmet类为例,各条PR曲线的评估要求解释如下:
+
+- C75: 在IoU设置为0.75时的PR曲线, AP为0.681。
+- C50: 在IoU设置为0.5时的PR曲线,AP为0.946。C50与C75之间的白色区域面积代表将IoU从0.75放宽至0.5带来的AP增益。
+- Loc: 在IoU设置为0.1时的PR曲线,AP为0.959。Loc与C50之间的蓝色区域面积代表将IoU从0.5放宽至0.1带来的AP增益。蓝色区域面积越大,表示越多的检测框位置不够精准。
+- Sim: 在Loc的基础上,如果检测框与真值框的类别不相同,但两者同属于一个亚类,则不认为该检测框是错误的,在这种评估要求下的PR曲线, AP为0.961。Sim与Loc之间的红色区域面积越大,表示子类间的混淆程度越高。VOC格式的数据集所有的类别都属于同一个亚类。
+- Oth: 在Sim的基础上,如果检测框与真值框的亚类不相同,则不认为该检测框是错误的,在这种评估要求下的PR曲线,AP为0.961。Oth与Sim之间的绿色区域面积越大,表示亚类间的混淆程度越高。VOC格式的数据集中所有的类别都属于同一个亚类,故不存在亚类间的混淆。
+- BG: 在Oth的基础上,背景区域上的检测框不认为是错误的,在这种评估要求下的PR曲线,AP为0.970。BG与Oth之间的紫色区域面积越大,表示背景区域被误检的数量越多。
+- FN: 在BG的基础上,漏检的真值框不认为是错误的,在这种评估要求下的PR曲线,AP为1.00。FN与BG之间的橙色区域面积越大,表示漏检的真值框数量越多。
+
+从分析图表中可以看出,head、helmet两类检测效果较好,但仍然存在漏检的情况,特别是person存在很大的漏检问题;此外,通过helmet中C75指标可以看出,其相对于C50的0.946而言有些差了,因此定位性能有待进一步提高。为进一步理解造成这些问题的原因,将验证集上的预测结果进行了可视化,然后发现数据集标注存在以下问题:
+
+- 本数据集主要考虑到头部和安全帽的检测,因此在人的检测上,有的图片中标注了,而有的图片中没有标注,从而导致学习失效,引发person漏检。
+- head与helmet大多数情况标注较好,但由于部分拍摄角度导致有的图片中的head和helmet发生重叠以及太小导致学习有困难。
+
+考虑到漏检问题,一般是特征学习不够,无法识别出物体,因此基于这个方向,尝试替换backbone: DarkNet53 --> ResNet50_vd_dcn,在指标上的提升如下:
+
+| 模型                                                         | 推理时间 (ms/image) | map(Iou-0.5) | (coco)mmap | 安全帽AP(Iou-0.5) |
+| ------------------------------------------------------------ | :-------------------: | ------------ | :--------: | :---------------: |
+| YOLOv3 + ResNet50_vd_dcn + cluster_yolo_anchor+img_size(480) |         53.81         | **61.7**     |    39.1    |     **95.35**     |
+
+考虑到定位问题,通过尝试放大图片,不同的网络结构以及定位的优化策略: 利用`cluster_yolo_anchor`生成聚类的anchor或开启iou_aware。最终得到上线模型PPYOLOV2的精度如下:
+
+| 模型                                           | 推理时间 (ms/image) | map(Iou-0.5) | (coco)mmap | 安全帽AP(Iou-0.5) |
+| ---------------------------------------------- | :-------------------: | ------------ | :--------: | :---------------: |
+| **PPYOLOV2 + ResNet50_vd_dcn + img_size(608)** |         81.52         | 61.6         |  **41.3**  |       95.32       |
+
+其中helmet类误差分析如下图:
+
+<div align="center">
+    <img src="./images/13.png"  width = "640" />
+</div>
+
+
+从分析表中可以看出:
+
+- C75指标效果明显改善,定位更加准确:**从0.681提升到0.742**。
+- 其中BG到FN的差距**从0.03降低到了0.02**,说明漏检情况有所改善。
+- 其中Loc与Sim的差距**从0.002降低到了0.001**,说明混淆程度也下降了。
+- 其中Oth与BG的差距**从0.019降低到了0.015**,说明检测错误下降了。
+
+本项目优化整体分析可归纳为以下几点:
+
+- 通过选用适当更优的骨干网络可以改善漏检的情况,因此漏检方面的优化可以考虑先从骨干网络替换上开始——当然必要的数据清洗也是不可缺少的,要是数据集本身漏标,则会从根本上影响模型的学习。
+- 通过放大图像,可以对一些中小目标的物体检测起到一定的优化作用。
+- 通过聚类anchor以及iou_aware等操作可以提高模型的定位能力,直接体现是在高IoU上也能有更好的表现。【因此,定位不准可以从模型的anchor以及模型的结构上入手进行优化】
+
+# (3) 数据增强选择
+
+| 训练预处理 | 验证预处理 |
+| :----------------------------------------------------------: | :----------------------------------------------------------: |
+|                  MixupImage(mixup_epoch=-1)                  |           Resize(target_size=480, interp='CUBIC')            |
+|                       RandomDistort()                        | Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) |
+|   RandomExpand(im_padding_value=[123.675, 116.28, 103.53])   |                                                              |
+|                         RandomCrop()                         |                                                              |
+|                    RandomHorizontalFlip()                    |                                                              |
+| BatchRandomResize(target_sizes=[320, 352, 384, 416, 448, 480, 512, 544, 576, 608],interp='RANDOM') |                                                              |
+| Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) |                                                              |
+
+在加入了[RandomHorizontalFlip](https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html#randomhorizontalflip)、[RandomDistort](https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html#randomdistort)、[RandomCrop](https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html#randomcrop)、[RandomExpand](https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html#randomexpand)、[BatchRandomResize](https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html#batchrandomresize)、[MixupImage](https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html#mixupimage)这几种数据增强方法后,对模型的优化有一定的积极作用;取消这些预处理后,模型性能会有一定的下降。
+
+**PS**:建议在训练初期都加上这些预处理方法,到后期模型超参数以及相关结构确定最优之后,再进行数据方面的再优化: 比如数据清洗,数据预处理方法筛选等。
+
+
+
+>

+ 47 - 0
examples/helmet_detection/code/infer.py

@@ -0,0 +1,47 @@
+import glob
+import numpy as np
+import threading
+import time
+import random
+import os
+import base64
+import cv2
+import json
+import paddlex as pdx
+
+# 待预测图片路径
+image_name = 'dataset/JPEGImages/hard_hat_workers1049.jpg'
+
+# 预测模型加载
+model = pdx.load_model('output/yolov3_darknet53/best_model')
+
+# 读取图片与获取预测结果
+img = cv2.imread(image_name)
+result = model.predict(img)
+
+# 解析预测结果,并保存到txt中
+keep_results = []
+areas = []
+f = open('result.txt', 'a')
+count = 0
+for dt in np.array(result):
+    cname, bbox, score = dt['category'], dt['bbox'], dt['score']
+    if score < 0.5:
+        continue
+    keep_results.append(dt)
+    count += 1
+    f.write(str(dt) + '\n')
+    f.write('\n')
+    areas.append(bbox[2] * bbox[3])
+areas = np.asarray(areas)
+sorted_idxs = np.argsort(-areas).tolist()
+keep_results = [keep_results[k]
+                for k in sorted_idxs] if len(keep_results) > 0 else []
+print(keep_results)
+print(count)
+f.write("the total number is :" + str(int(count)))
+f.close()
+
+# 可视化保存
+pdx.det.visualize(
+    image_name, result, threshold=0.5, save_dir='./output/yolov3_darknet53')

+ 68 - 0
examples/helmet_detection/code/train.py

@@ -0,0 +1,68 @@
+import numpy as np
+import paddlex as pdx
+from paddlex import transforms as T
+
+# 定义训练和验证时的transforms
+# API说明:https://github.com/PaddlePaddle/PaddleX/blob/release/2.0.0/paddlex/cv/transforms/operators.py
+train_transforms = T.Compose([
+    T.MixupImage(mixup_epoch=-1), T.RandomDistort(),
+    T.RandomExpand(im_padding_value=[123.675, 116.28, 103.53]), T.RandomCrop(),
+    T.RandomHorizontalFlip(), T.BatchRandomResize(
+        target_sizes=[320, 352, 384, 416, 448, 480, 512, 544, 576, 608],
+        interp='RANDOM'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+eval_transforms = T.Compose([
+    T.Resize(
+        target_size=480, interp='CUBIC'), T.Normalize(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+])
+
+# 定义训练和验证所用的数据集
+# API说明:https://github.com/PaddlePaddle/PaddleX/blob/release/2.0.0/paddlex/cv/datasets/voc.py
+train_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/train_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=train_transforms,
+    shuffle=True)
+
+eval_dataset = pdx.datasets.VOCDetection(
+    data_dir='dataset',
+    file_list='dataset/val_list.txt',
+    label_list='dataset/labels.txt',
+    transforms=eval_transforms,
+    shuffle=False)
+
+# YOLO检测模型的预置anchor生成
+# API说明: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0.0/paddlex/tools/anchor_clustering/yolo_cluster.py
+anchors = train_dataset.cluster_yolo_anchor(num_anchors=9, image_size=480)
+anchor_masks = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
+
+# 初始化模型,并进行训练
+# 可使用VisualDL查看训练指标,参考https://github.com/PaddlePaddle/PaddleX/tree/release/2.0.0/tutorials/train#visualdl可视化训练指标
+num_classes = len(train_dataset.labels)
+model = pdx.det.YOLOv3(
+    num_classes=num_classes,
+    backbone='DarkNet53',
+    anchors=anchors.tolist() if isinstance(anchors, np.ndarray) else anchors,
+    anchor_masks=anchor_masks,
+    label_smooth=True,
+    ignore_threshold=0.6)
+
+# API说明:https://github.com/PaddlePaddle/PaddleX/blob/release/2.0.0/paddlex/cv/models/detector.py
+# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
+model.train(
+    num_epochs=200,  # 训练轮次
+    train_dataset=train_dataset,  # 训练数据
+    eval_dataset=eval_dataset,  # 验证数据
+    train_batch_size=16,  # 批大小
+    pretrain_weights='COCO',  # 预训练权重
+    learning_rate=0.005 / 12,  # 学习率
+    warmup_steps=500,  # 预热步数
+    warmup_start_lr=0.0,  # 预热起始学习率
+    save_interval_epochs=5,  # 每5个轮次保存一次,有验证数据时,自动评估
+    lr_decay_epochs=[85, 135],  # step学习率衰减
+    save_dir='output/yolov3_darknet53',  # 保存路径
+    use_vdl=True)  # 启用VisualDL进行可视化训练记录

二进制
examples/helmet_detection/images/1.png


二进制
examples/helmet_detection/images/10.png


二进制
examples/helmet_detection/images/11.png


二进制
examples/helmet_detection/images/12.png


二进制
examples/helmet_detection/images/13.png


二进制
examples/helmet_detection/images/14.png


部分文件因为文件数量过多而无法显示