
fix conflicts

FlyingQianMM 4 years ago
parent
commit
0de88c6acd
36 changed files with 593 additions and 260 deletions
  1. 0 171
      README_cn.md
  2. 145 0
      README_en.md
  3. 144 0
      docs/Resful_API/docs/readme.md
  4. Binary
      docs/Resful_API/images/1.png
  5. Binary
      docs/Resful_API/images/10.png
  6. Binary
      docs/Resful_API/images/2.5.png
  7. Binary
      docs/Resful_API/images/2.6.png
  8. Binary
      docs/Resful_API/images/2.png
  9. Binary
      docs/Resful_API/images/3.png
  10. Binary
      docs/Resful_API/images/4.png
  11. Binary
      docs/Resful_API/images/5.5.png
  12. Binary
      docs/Resful_API/images/5.png
  13. Binary
      docs/Resful_API/images/6.png
  14. Binary
      docs/Resful_API/images/7.png
  15. Binary
      docs/Resful_API/images/8.png
  16. Binary
      docs/Resful_API/images/9.png
  17. Binary
      docs/images/weichat.png
  18. 46 10
      dygraph/README.md
  19. 21 16
      dygraph/deploy/cpp/docs/manufacture_sdk/README.md
  20. Binary
      dygraph/deploy/cpp/docs/manufacture_sdk/images/pipeline_arch.png
  21. Binary
      dygraph/deploy/cpp/docs/manufacture_sdk/images/pipeline_det.png
  22. 7 1
      dygraph/deploy/cpp/model_deploy/common/include/transforms.h
  23. 22 4
      dygraph/deploy/cpp/model_deploy/common/src/transforms.cpp
  24. 51 2
      dygraph/deploy/cpp/model_deploy/paddlex/include/x_standard_config.h
  25. 5 1
      dygraph/deploy/cpp/model_deploy/paddlex/src/x_model.cpp
  26. 0 1
      dygraph/examples/meter_reader/README.md
  27. 0 1
      dygraph/examples/meter_reader/reader_infer.py
  28. 1 1
      dygraph/examples/meter_reader/train_segmentation.py
  29. 58 13
      dygraph/paddlex/cv/models/base.py
  30. 25 6
      dygraph/paddlex/cv/models/classifier.py
  31. 25 23
      dygraph/paddlex/cv/models/detector.py
  32. 2 2
      dygraph/paddlex/cv/models/load_model.py
  33. 19 4
      dygraph/paddlex/cv/models/segmenter.py
  34. 2 2
      dygraph/paddlex/cv/transforms/operators.py
  35. 1 1
      dygraph/paddlex/utils/__init__.py
  36. 19 1
      dygraph/paddlex/utils/checkpoint.py

+ 0 - 171
README_cn.md

@@ -1,171 +0,0 @@
-简体中文| [English](./README.md)
-
-
-
-
-<p align="center">
-  <img src="./docs/gui/images/paddlex.png" width="360" height ="55" alt="PaddleX" align="middle" />
-</p>
- <p align= "center"> PaddleX -- 飞桨全流程开发工具,以低代码的形式支持开发者快速实现产业实际项目落地 </p>
-
-[![License](https://img.shields.io/badge/license-Apache%202-red.svg)](LICENSE) [![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleX.svg)](https://github.com/PaddlePaddle/PaddleX/releases) ![python version](https://img.shields.io/badge/python-3.6+-orange.svg) ![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg)
- ![QQGroup](https://img.shields.io/badge/QQ_Group-1045148026-52B6EF?style=social&logo=tencent-qq&logoColor=000&logoWidth=20)
-
-
-## PaddleX全面升级动态图,目前默认使用静态图版本,动态图版本位于[dygraph](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph)中。pip安装1.3.10版本对应使用静态图版本,pip安装2.0.0rc0即使用动态图版本。
-
-:hugs: PaddleX 集成飞桨智能视觉领域**图像分类**、**目标检测**、**语义分割**、**实例分割**任务能力,将深度学习开发全流程从**数据准备**、**模型训练与优化**到**多端部署**端到端打通,并提供**统一任务API接口**及**图形化开发界面Demo**。开发者无需分别安装不同套件,以**低代码**的形式即可快速完成飞桨全流程开发。
-
-:factory: **PaddleX** 经过**质检**、**安防**、**巡检**、**遥感**、**零售**、**医疗**等十多个行业实际应用场景验证,沉淀产业实际经验,**并提供丰富的案例实践教程**,全程助力开发者产业实践落地。
-
-
-
-:heart:**您可以前往  [完整PaddleX在线使用文档目录](https://paddlex.readthedocs.io/zh_CN/develop/index.html)  查看完整*Read the Doc* 格式的文档,获得更好的阅读体验**:heart:
-
-
-
-![](./docs/gui/images/paddlexoverview.png)
-
-
-
-## 安装
-
-**PaddleX提供三种开发模式,满足用户的不同需求:**
-
-1. **Python开发模式:**
-
-   通过简洁易懂的Python API,在兼顾功能全面性、开发灵活性、集成方便性的基础上,给开发者最流畅的深度学习开发体验。<br>
-
-  **前置依赖**
-> - paddlepaddle >= 1.8.4
-> - python >= 3.6
-> - cython
-> - pycocotools
-
-```
-pip install paddlex -i https://mirror.baidu.com/pypi/simple
-```
-详细安装方法请参考[PaddleX安装](https://paddlex.readthedocs.io/zh_CN/develop/install.html)
-
-
-2. **Padlde GUI模式:**
-
-   无代码开发的可视化客户端,应用Paddle API实现,使开发者快速进行产业项目验证,并为用户开发自有深度学习软件/应用提供参照。
-
-- 前往[PaddleX官网](https://www.paddlepaddle.org.cn/paddle/paddlex),申请下载PaddleX GUI一键绿色安装包。
-
-- 前往[PaddleX GUI使用教程](./docs/gui/how_to_use.md)了解PaddleX GUI使用详情。
-
-- [PaddleX GUI安装环境说明](./docs/gui/download.md)
-
-3. **PaddleX Restful:**  
-  使用基于RESTful API开发的GUI与Web Demo实现远程的深度学习全流程开发;同时开发者也可以基于RESTful API开发个性化的可视化界面
-- 前往[PaddleX RESTful API使用教程](./docs/gui/restful/introduction.md)  
-
-
-## 产品模块说明
-
-- **数据准备**:兼容ImageNet、VOC、COCO等常用数据协议,同时与Labelme、精灵标注助手、[EasyData智能数据服务平台](https://ai.baidu.com/easydata/)等无缝衔接,全方位助力开发者更快完成数据准备工作。
-
-- **数据预处理及增强**:提供极简的图像预处理和增强方法--Transforms,适配imgaug图像增强库,支持**上百种数据增强策略**,是开发者快速缓解小样本数据训练的问题。
-
-- **模型训练**:集成[PaddleClas](https://github.com/PaddlePaddle/PaddleClas), [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection), [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)视觉开发套件,提供大量精选的、经过产业实践的高质量预训练模型,使开发者更快实现工业级模型效果。
-
-- **模型调优**:内置模型可解释性模块、[VisualDL](https://github.com/PaddlePaddle/VisualDL)可视化分析工具。使开发者可以更直观的理解模型的特征提取区域、训练过程参数变化,从而快速优化模型。
-
-- **多端安全部署**:内置[PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)模型压缩工具和**模型加密部署模块**,与飞桨原生预测库Paddle Inference及高性能端侧推理引擎[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite) 无缝打通,使开发者快速实现模型的多端、高性能、安全部署。
-
-
-
-## 完整使用文档及API说明
-
-- [完整PaddleX在线使用文档目录](https://paddlex.readthedocs.io/zh_CN/develop/index.html):heart:
-
-- [10分钟快速上手系列教程](https://paddlex.readthedocs.io/zh_CN/develop/quick_start.html)
-- [PaddleX模型训练教程集合](https://paddlex.readthedocs.io/zh_CN/develop/train/index.html)
-- [PaddleX API接口说明](https://paddlex.readthedocs.io/zh_CN/develop/apis/index.html)
-- [PaddleX RESTful API说明](https://paddlex.readthedocs.io/zh_CN/develop/gui/restful/introduction.html)
-
-### 在线项目示例
-
-为了使开发者更快掌握PaddleX API,我们创建了一系列完整的示例教程,您可通过AIStudio一站式开发平台,快速在线运行PaddleX的项目。
-
-- [PaddleX快速上手CV模型训练](https://aistudio.baidu.com/aistudio/projectdetail/450925)
-- [PaddleX快速上手——MobileNetV3-ssld 化妆品分类](https://aistudio.baidu.com/aistudio/projectdetail/450220)
-- [PaddleX快速上手——Faster-RCNN AI识虫](https://aistudio.baidu.com/aistudio/projectdetail/439888)
-- [PaddleX快速上手——DeepLabv3+ 视盘分割](https://aistudio.baidu.com/aistudio/projectdetail/440197)
-
-## 全流程产业应用案例:star:
-
-(continue to be updated)
-
-* 工业巡检:
-  * [工业表计读数](https://paddlex.readthedocs.io/zh_CN/develop/examples/meter_reader.html)
-* 工业质检:
-  * [铝材表面缺陷检测](https://paddlex.readthedocs.io/zh_CN/develop/examples/industrial_quality_inspection/README.html)
-* 卫星遥感:
-  * [RGB遥感影像分割](https://paddlex.readthedocs.io/zh_CN/develop/examples/remote_sensing.html)
-  * [多通道遥感影像分割](https://paddlex.readthedocs.io/zh_CN/develop/examples/multi-channel_remote_sensing/README.html)
-  * [地块变化检测](https://paddlex.readthedocs.io/zh_CN/develop/examples/multi-channel_remote_sensing/README.html)
-* [人像分割](https://paddlex.readthedocs.io/zh_CN/develop/examples/human_segmentation.html)
-* 模型多端安全部署
-  * [CPU/GPU(加密)部署](https://paddlex.readthedocs.io/zh_CN/develop/deploy/server/index.html)
-  * [OpenVINO加速部署](https://paddlex.readthedocs.io/zh_CN/develop/deploy/openvino/index.html)
-  * [Nvidia Jetson开发板部署](https://paddlex.readthedocs.io/zh_CN/develop/deploy/jetson/index.html)
-  * [树莓派部署](https://paddlex.readthedocs.io/zh_CN/develop/deploy/raspberry/index.html)
-
-* [模型可解释性](https://paddlex.readthedocs.io/zh_CN/develop/appendix/interpret.html)
-
-## :question:[FAQ](./docs/gui/faq.md):question:
-
-## 交流与反馈
-
-- 项目官网:https://www.paddlepaddle.org.cn/paddle/paddlex
-
-- PaddleX用户交流群:957286141 (手机QQ扫描如下二维码快速加入)  
-
-  <p align="center">
-    <img src="./docs/gui/images/QR2.jpg" width="250" height ="360" alt="QR" align="middle" />
-  </p>
-
-
-
-## 更新日志
-
-> [历史版本及更新内容](https://paddlex.readthedocs.io/zh_CN/develop/change_log.html)
-- **2020.09.07 v1.2.0**
-
-  新增产业最实用目标检测模型PP-YOLO,FasterRCNN、MaskRCNN、YOLOv3、DeepLabv3p等模型新增内置COCO数据集预训练模型,适用于小模型精调。新增多种Backbone,优化体积及预测速度。优化OpenVINO、PaddleLite Android、服务端C++预测部署方案,新增树莓派部署方案等。
-
-- **2020.07.12 v1.1.0**
-
-  新增人像分割、工业标记读数案例。模型新增HRNet、FastSCNN、FasterRCNN,实例分割MaskRCNN新增Backbone HRNet。集成X2Paddle,PaddleX所有分类模型和语义分割模型支持导出为ONNX协议。新增模型加密Windows平台支持。新增Jetson、Paddle Lite模型部署预测方案。
-
-- **2020.05.20 v1.0.0**
-
-  新增C++和Python部署,模型加密部署,分类模型OpenVINO部署。新增模型可解释性接口
-
-- **2020.05.17 v0.1.8**
-
-  新增EasyData平台数据标注格式,支持imgaug数据增强库的pixel-level算子
-
-## 近期活动更新
-
-- 2020.12.16
-
-  《直击深度学习部署最后一公里 C#软件部署实战》b站直播中奖用户名单请点击[PaddleX直播中奖名单](./docs/luckydraw.md)查看~
-
-- 2020.12.09
-
-  往期直播《直击深度学习部署最后一公里 目标检测兴趣小组》回放链接:https://www.bilibili.com/video/BV1rp4y1q7ap?from=search&seid=105037779997274685
-
-## :hugs: 贡献代码:hugs:
-
-我们非常欢迎您为PaddleX贡献代码或者提供使用建议。如果您可以修复某个issue或者增加一个新功能,欢迎给我们提交Pull Requests。
-
-### 开发者贡献项目
-
-* [工业相机实时目标检测GUI](https://github.com/xmy0916/SoftwareofIndustrialCameraUsePaddle)
-(windows系统,基于pyqt5开发)
-* [工业相机实时目标检测GUI](https://github.com/LiKangyuLKY/PaddleXCsharp)
-(windows系统,基于C#开发)

+ 145 - 0
README_en.md

@@ -0,0 +1,145 @@
+[简体中文](./README_cn.md) | English
+
+
+
+
+
+<p align="center">
+  <img src="./docs/gui/images/paddlex.png" width="360" height ="55" alt="PaddleX" align="middle" />
+</p>
+
+
+<p align= "center"> PaddleX -- PaddlePaddle End-to-End Development Toolkit,
+  which enables developers to quickly implement real industry projects in a low-code form </p>
+
+[![License](https://img.shields.io/badge/license-Apache%202-red.svg)](LICENSE) [![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleX.svg)](https://github.com/PaddlePaddle/PaddleX/releases) ![python version](https://img.shields.io/badge/python-3.6+-orange.svg) ![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg)
+ ![QQGroup](https://img.shields.io/badge/QQ_Group-1045148026-52B6EF?style=social&logo=tencent-qq&logoColor=000&logoWidth=20)
+
+
+## PaddleX has been fully upgraded to dynamic graph mode! Static graph mode is still the default, and the dynamic graph code base is in [dygraph](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph). Install version 1.3.11 via pip to use static graph mode; version 2.0.0rc0 corresponds to dynamic graph mode.
+
+
+:hugs: PaddleX integrates the capabilities of **Image classification**, **Object detection**, **Semantic segmentation**, and **Instance segmentation** from the Paddle CV toolkits, and connects the whole development process from **Data preparation** and **Model training and optimization** to **Multi-end deployment**. At the same time, PaddleX provides **Succinct APIs** and a **Graphical User Interface**. Developers can quickly complete end-to-end development in a **low-code** form without installing different libraries.
+
+:factory: **PaddleX** has been validated in more than a dozen industry application scenarios such as **Quality Inspection**, **Security**, **Patrol Inspection**, **Remote Sensing**, **Retail**, and **Medical**. In addition, it **provides a wealth of case practice tutorials** to help developers apply it to real cases easily.
+
+
+
+:heart: **You can go to [Complete PaddleX Online Documentation Contents](https://paddlex.readthedocs.io/zh_CN/develop_en/index.html) for the complete tutorial in *Read the Doc* format and a better reading experience** :heart:
+
+
+
+![](./docs/gui/images/paddlexoverview_en.jpg)
+
+
+
+## Installation
+
+**PaddleX has two development modes to meet different needs of users:**
+
+1. **Python development mode:**
+
+The PaddleX Python API is designed with comprehensive functionality, development flexibility, and integration convenience in mind, giving developers the smoothest deep learning development experience.
+
+**Pre-dependence**
+
+> - paddlepaddle >= 1.8.4
+> - python >= 3.6
+> - cython
+> - pycocotools
+
+You should use the `python3` and `pip3` commands instead if you also have Python 2 installed.
+
+```
+pip install paddlex -i https://mirror.baidu.com/pypi/simple
+```
+Please refer to the [PaddleX installation](https://paddlex.readthedocs.io/zh_CN/develop/install.html) for detailed installation method.
+
+
+2. **Paddle GUI (Graphical User Interface) mode:**
+
+It is an all-in-one client that enables developers to implement deep learning projects without writing code.
+
+- Go to [PaddleX Official Website](https://www.paddlepaddle.org.cn/paddle/paddlex) to download the all-in-one client.
+
+- Go to the [PaddleX GUI tutorial](./docs/gui/how_to_use.md) for details on using it.
+
+- [PaddleX GUI Environment Requirements for Installation](./docs/gui/download.md)
+
+
+## Product Module Description
+
+- **Data preparation**: Compatible with common data protocols such as ImageNet, VOC, and COCO, and seamlessly interconnected with Labelme, Colabeler, and the [EasyData intelligent data service platform](https://ai.baidu.com/easydata/), to help developers quickly complete data preparation.
+- **Data pre-processing and enhancement**: Provides a minimalist image pre-processing and enhancement method, Transforms. It adapts imgaug, a powerful image enhancement library, so that PaddleX supports **hundreds of data enhancement strategies**, helping developers quickly alleviate the problem of training with small sample datasets.
+- **Model training**: PaddleX integrates [PaddleClas](https://github.com/PaddlePaddle/PaddleClas), [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection), [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg), and more, providing a large number of selected, industry-proven, high-quality pre-trained models that enable developers to achieve industry-grade model performance much more quickly.
+- **Model tuning**: A model-interpretability module and the [VisualDL](https://github.com/PaddlePaddle/VisualDL) visual analysis tool are integrated as well, allowing developers to understand the model's feature extraction regions and the changes of training parameters more intuitively, so as to quickly optimize the model.
+- **Multi-End Secure Deployment**: The built-in model compression tool [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim) and the **Model Encryption Deployment Module** are seamlessly interconnected with the native prediction library **Paddle Inference** and the multi-platform high-performance deep learning inference engine [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite), enabling developers to quickly implement multi-end, high-performance, secure deployments of their models.
+
+
+
+## Full Documentation and API Description
+
+- [Complete PaddleX online documentation contents](https://paddlex.readthedocs.io/zh_CN/develop_en/):heart:
+
+- [10-Minute Quick Start Tutorial Series](https://paddlex.readthedocs.io/zh_CN/develop/quick_start.html)
+- [Collection of PaddleX Model Training Tutorials](https://paddlex.readthedocs.io/zh_CN/develop/train/index.html)
+- [PaddleX API Interface Description](https://paddlex.readthedocs.io/zh_CN/develop/apis/index.html)
+
+### Examples of Online Projects
+
+To get developers up to speed with the PaddleX API, we've created a complete series of sample tutorials; through the **AIStudio** one-stop development platform, you can quickly run PaddleX projects online.
+
+- [PaddleX Quick Start - CV Model Training](https://aistudio.baidu.com/aistudio/projectdetail/450925)
+- [PaddleX Quick Start - MobileNetV3-ssld Cosmetics Classification](https://aistudio.baidu.com/aistudio/projectdetail/450220)
+- [PaddleX Quick Start - Faster-RCNN AI Bug Recognition](https://aistudio.baidu.com/aistudio/projectdetail/439888)
+- [PaddleX Quick Start - DeepLabv3+ Semantic Segmentation](https://aistudio.baidu.com/aistudio/projectdetail/440197)
+
+
+
+## Full Process Industry Applications:star:
+
+(continuously updated)
+
+* Industrial inspections:
+  - [Industrial Meter Readings](https://paddlex.readthedocs.io/zh_CN/develop_en/examples/meter_reader.html)
+* [Industrial quality control](https://paddlex.readthedocs.io/zh_CN/develop_en/examples/industrial_quality_inspection/README.html)
+* Satellite Image Understanding:
+  * [RGB Satellite Image Segmentation](https://paddlex.readthedocs.io/zh_CN/develop_en/examples/remote_sensing.html)
+  * [Multi-Channel Satellite Image Segmentation](https://paddlex.readthedocs.io/zh_CN/develop_en/examples/multi-channel_remote_sensing/README.html)
+  * [Land Parcel Change Detection](https://paddlex.readthedocs.io/zh_CN/develop_en/examples/change_detection.html)
+* [Portrait Segmentation](https://paddlex.readthedocs.io/zh_CN/develop_en/examples/human_segmentation.html)
+* Multi-platform Deployment with Encryption
+  - [CPU/GPU (Encryption) deployment](https://paddlex.readthedocs.io/zh_CN/develop_en/deploy/server/index.html)
+  - [Deployment with OpenVINO toolkit](https://paddlex.readthedocs.io/zh_CN/develop_en/deploy/openvino/index.html)
+  - [Deploy on Nvidia Jetson](https://paddlex.readthedocs.io/zh_CN/develop_en/deploy/nvidia-jetson.html)
+  - [Deploy on Raspberry Pi](https://paddlex.readthedocs.io/zh_CN/develop_en/deploy/raspberry/index.html)
+
+
+
+## :question:[FAQ](./docs/gui/faq.md):question:
+
+
+
+## Communication and Feedback
+
+- Project official website: https://www.paddlepaddle.org.cn/paddle/paddlex
+- PaddleX user group: 957286141 (Scan the following QR code on Mobile QQ to join quickly)
+
+<p align="center">
+  <img src="./docs/gui/images/QR2.jpg" width="250" height ="360" alt="QR" align="middle" />
+</p>
+
+## Release Note
+
+> [Complete Release Note](https://paddlex.readthedocs.io/zh_CN/develop/change_log.html)
+- 2020.12.20 v1.3.0
+- 2020.09.05 v1.2.0
+- 2020.07.13 v1.1.0
+- 2020.07.12 v1.0.8
+- 2020.05.20 v1.0.0
+
+
+
+## :hugs: Contribution :hugs:
+
+You are welcomed to contribute codes to PaddleX or provide suggestions. If you can fix an issue or add a new feature, please feel free to submit Pull Requests.

+ 144 - 0
docs/Resful_API/docs/readme.md

@@ -0,0 +1,144 @@
+# PaddleX_Restful API --快速搭建私有化训练云服务
+
+* ## 什么是RESTful
+* ## PaddleX_Restful API 说明
+* ## 如何使用PaddleX_Restful API快速搭建私有化训练云平台
+
+
+
+## *什么是RESTful*
+
+RESTful是一种网络应用程序的设计风格和开发方式,基于HTTP,可以使用XML格式或JSON格式定义接口。RESTful适用于移动互联网厂商作为业务接口的场景,实现第三方OTT调用移动网络资源的功能,动作类型为新增、变更、删除所调用资源。
+
+简单来说就是用户可以起一个远端的服务,客户端通过http形式进行访问。
+
+## *PaddleX_Restful API 说明*
+
+PaddleX RESTful是基于PaddleX开发的RESTful API。对于开发者来说,只需要简单的指令便可开启PaddleX RESTful服务。对于那些有远程训练需求、同时需要数据保密的开发者来说,PaddleX_Restful API简单易用的操作可以很好地满足上述要求。
+
+开启RESTful服务后可以实现如下功能:
+
+* 通过下载基于RESTful API的GUI连接开启RESTful服务的服务端,实现远程深度学习全流程开发。
+* 通过使用web demo连接开启RESTful服务的服务端,实现远程深度学习全流程开发。
+* 根据RESTful API来开发您自己个性化的可视化界面。
+
+
+<div align="center">
+<img src="../images/1.png"  width = "500" />              </div>
+
+## *如何使用PaddleX_Restful API快速搭建私有化训练云平台*
+
+在该示例中,PaddleX_Restful运行在一台带GPU的Linux服务器上,用户通过其他电脑连接该服务器进行远程操作。
+### 1  环境准备
+在服务器下载PaddlePaddle和PaddleX及其他依赖
+
+* 下载PaddlePaddle
+
+`pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple`
+
+* 下载PaddleX
+
+`pip install paddlex==1.3.11 -i https://mirror.baidu.com/pypi/simple`
+
+* 下载pycuda(如果不使用GPU,该项可不进行下载)
+
+`pip install pycuda -i https://mirror.baidu.com/pypi/simple`
+
+### 2  启动Restful服务
+
+在服务器上执行如下命令,其中端口号由用户自定义,`workspace_dir`是用户在服务器上创建的工作目录
+
+`paddlex_restful --start_restful --port [端口号] --workspace_dir [工作空间地址]`
+
+例如,开启一个端口为27000、工作路径为`cv/x/resful_space`的服务:
+
+`paddlex_restful --start_restful --port 27000 --workspace_dir cv/x/resful_space`
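启动服务后,可以先用一个简单的探活脚本确认端口可访问,再去连接GUI或Web Demo。下面是一个仅依赖标准库的示意性示例(`restful_service_alive`为本文假设的辅助函数,并非PaddleX自带接口):

```python
import urllib.request
import urllib.error

def restful_service_alive(ip, port, timeout=3.0):
    """Probe a (hypothetical) PaddleX RESTful server with a plain HTTP GET
    against its root URL. Returns True when any HTTP response comes back,
    False when the connection is refused or times out."""
    url = "http://{}:{}/".format(ip, port)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.getcode() < 500
    except urllib.error.HTTPError:
        # The server answered with an HTTP error code -> it is running.
        return True
    except (urllib.error.URLError, OSError):
        return False
```

例如 `restful_service_alive("222.95.100.37", 27000)` 返回 `True` 即说明服务已在该端口监听。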
+
+<div align="center">
+<img src="../images/2.png"  width = "800" />              </div>
+
+出现上述图片所示的内容,即为开启服务成功。
+
+### 3 启动客户端进行远程全流程训练
+
+为了方便大家进行远程调试,PaddleX_Restful提供了两种访问形式,一种是Web图形化界面,另一种是客户端的图形化界面
+* ## Web图形化界面
+
+### 3.1 打开Web界面
+当用户启动Restful服务后,在Web界面的导航栏中只需输入IP地址和端口号即可。例如当前案例的IP地址是222.95.100.37,端口号是25001
+
+即在导航栏输入 `http://222.95.100.37:25001/` 即可出现如下界面
+
+<div align="center">
+<img src="../images/2.5.png"  width = "800" />              </div>
+
+### 3.2 服务器设置
+点击界面中的相应内容,对服务器进行设置
+
+<div align="center">
+<img src="../images/2.6.png"  width = "800" />              </div>
+
+### 3.3 下载示例项目
+
+用户根据自己的需求,选择是否下载示例项目
+
+<div align="center">
+<img src="../images/3.png"  width = "800" />              </div>
+
+最终画面如下图所示
+
+<div align="center">
+<img src="../images/4.png"  width = "800" />              </div>
+
+### 3.4 创建数据集
+用户如果要自定义训练,首先需要创建自己的数据集;
+在此之前,需要先将数据集上传到服务器上。
+
+<div align="center">
+<img src="../images/5.png"  width = "800" />              </div>
+
+输入数据在服务器上的存储路径,开始导入数据。在服务器上上传的数据集,必须符合PaddleX训练数据的命名格式要求。
+
+<div align="center">
+<img src="../images/5.5.png"  width = "800" />              </div>
+
+<div align="center">
+<img src="../images/6.png"  width = "800" />              </div>
+
+数据导入成功后,进行数据集划分
+<div align="center">
+<img src="../images/7.png"  width = "800" />              </div>
+
+用户在划分完成数据集后,也可对数据集进行可视化观察
+<div align="center">
+<img src="../images/8.png"  width = "800" />              </div>
+
+### 3.5 开始训练
+
+在数据集创建完成后,用户可创建新项目,并进行训练
+
+<div align="center">
+<img src="../images/9.png"  width = "800" />              </div>
+
+配置好相关参数后,点击“开始训练”即可开始训练。
+<div align="center">
+<img src="../images/10.png"  width = "800" />              </div>
+
+
+* ## 客户端图形化界面
+客户端操作流程和Web界面基本一致,提供了Mac和Windows两个版本,用户可自行下载并操作
+
+- [MAC](https://bj.bcebos.com/paddlex/PaddleX_Remote_GUI/mac/PaddleX_Remote_GUI.zip)
+- [Windows](https://bj.bcebos.com/paddlex/PaddleX_Remote_GUI/windows/PaddleX_Remote_GUI.zip)
+
+### 4  Restful 二次开发说明
+
+开发者可以使用PaddleX RESTful API 进行二次开发,按照自己的需求开发可视化界面,详细请参考以下文档  
+
+[RESTful API 二次开发简介](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/gui/restful/restful.md)  
+
+[快速开始](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/gui/restful/quick_start.md)  
+
+[API 参考文档](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/gui/restful/restful_api.md)  
+
+[自定义数据结构](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/gui/restful/data_struct.md)

Binary
docs/Resful_API/images/1.png


Binary
docs/Resful_API/images/10.png


Binary
docs/Resful_API/images/2.5.png


Binary
docs/Resful_API/images/2.6.png


Binary
docs/Resful_API/images/2.png


Binary
docs/Resful_API/images/3.png


Binary
docs/Resful_API/images/4.png


Binary
docs/Resful_API/images/5.5.png


Binary
docs/Resful_API/images/5.png


Binary
docs/Resful_API/images/6.png


Binary
docs/Resful_API/images/7.png


Binary
docs/Resful_API/images/8.png


Binary
docs/Resful_API/images/9.png


Binary
docs/images/weichat.png


+ 46 - 10
dygraph/README.md

@@ -7,6 +7,13 @@
 </p>
  <p align= "center"> PaddleX -- 飞桨全流程开发工具,以低代码的形式支持开发者快速实现产业实际项目落地 </p>
 
+## :heart:重磅功能升级
+* 全新发布Manufacture SDK,提供工业级多端多平台部署加速的预编译飞桨部署开发包(SDK),通过配置业务逻辑流程文件即可以低代码方式快速完成推理部署[欢迎体验](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp)。
+
+* PaddleX部署全面升级,支持飞桨视觉套件PaddleDetection、PaddleClas、PaddleSeg、PaddleX的统一部署能力。[欢迎体验](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp)。
+
+
+
 [![License](https://img.shields.io/badge/license-Apache%202-red.svg)](LICENSE) [![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleX.svg)](https://github.com/PaddlePaddle/PaddleX/releases) ![python version](https://img.shields.io/badge/python-3.6+-orange.svg) ![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg)
  ![QQGroup](https://img.shields.io/badge/QQ_Group-1045148026-52B6EF?style=social&logo=tencent-qq&logoColor=000&logoWidth=20)
 
@@ -34,13 +41,21 @@
 
   **前置依赖**
 > - paddlepaddle == 2.1.0
-> - python >= 3.6
-> - cython
-> - pycocotools
+> - 安装PaddlePaddle Develop版本,具体请参考PaddlePaddle[安装主页](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/develop/install/pip/windows-pip.html)
+
+**安装方式**
+
+> - git clone --recurse-submodules https://github.com/PaddlePaddle/PaddleX.git
+> - cd PaddleX/dygraph
+> - pip install -r requirements.txt
+> - pip install -r submodules.txt
+> - python setup.py install
+
 
-```
-pip install paddlex==2.0.0rc -i https://mirror.baidu.com/pypi/simple
-```
+**特别说明**:Windows用户除了执行上述命令外,还需要安装pycocotools:
+
+> - pip install cython
+> - pip install git+https://gitee.com/jiangjiajun/philferriere-cocoapi.git#subdirectory=PythonAPI
 
 
 2. **Padlde GUI模式:**
@@ -55,16 +70,37 @@ pip install paddlex==2.0.0rc -i https://mirror.baidu.com/pypi/simple
 
 3. **PaddleX Restful:**  
   使用基于RESTful API开发的GUI与Web Demo实现远程的深度学习全流程开发;同时开发者也可以基于RESTful API开发个性化的可视化界面
-- 前往[PaddleX RESTful API使用教程](../docs/gui/restful/introduction.md)  
+- 前往[PaddleX RESTful API使用教程](../docs/Resful_API/docs/readme.md)  
 
 
 ## 使用教程
 
-- [模型训练教程](https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train)
-- [模型剪裁教程](https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/slim/prune)
+1. **API模式:**
+
+- [模型训练](https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/train)
+- [模型剪裁](https://github.com/PaddlePaddle/PaddleX/tree/release/2.0-rc/tutorials/slim/prune)
+
+
+
+2. **GUI模式:**
+
+- [图像分类](https://www.bilibili.com/video/BV1nK411F7J9?from=search&seid=3068181839691103009)
+- [目标检测](https://www.bilibili.com/video/BV1HB4y1A73b?from=search&seid=3068181839691103009)
+- [实例分割](https://www.bilibili.com/video/BV1M44y1r7s6?from=search&seid=3068181839691103009)
+- [语义分割](https://www.bilibili.com/video/BV1qQ4y1Z7co?from=search&seid=3068181839691103009)
 
+3. **模型部署:**
+- [Manufacture SDK](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp)
+提供工业级多端多平台部署加速的预编译飞桨部署开发包(SDK),通过配置业务逻辑流程文件即可以低代码方式快速完成推理部署
+- [PaddleX Deploy](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp) 支持飞桨视觉套件PaddleDetection、PaddleClas、PaddleSeg、PaddleX的统一部署能力
+## 产业级应用示例
 
-## :question:[FAQ](../docs/gui/faq.md):question:
+- [钢筋计数](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/examples/rebar_count)
+- [缺陷检测](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/examples/defect_detection)
+- [机械手抓取](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/examples/robot_grab)
+- [表计检测]()
+## :question:[FAQ](../docs/gui/faq.md):question:
 
 ## 交流与反馈
 

+ 21 - 16
dygraph/deploy/cpp/docs/manufacture_sdk/README.md

@@ -4,12 +4,17 @@ PaddleX-Deploy全面升级,支持飞桨视觉套件PaddleX、PaddleDetection
 
 在工业部署的开发过程中,常常因环境问题导致在部署代码编译环节中耗费较多的时间和人力成本。如果产线上的业务逻辑稍微复杂一点,尤其是串联多个模型时,则需要在模型推理前插入预处理、中间结果处理等操作,如此复杂的逻辑对应的部署代码开发工程量是很大的。
 
-为更进一步地提升部署效率,**:heart:PaddleaX部署全新发布Manufacture SDK,提供工业级多端多平台部署加速的预编译飞桨部署开发包(SDK),通过配置业务逻辑流程文件即可以低代码方式快速完成推理部署。**
+为更进一步地提升部署效率,**:heart:PaddleX部署全新发布Manufacture SDK,提供工业级多端多平台部署加速的预编译飞桨部署开发包(SDK),通过配置业务逻辑流程文件即可以低代码方式快速完成推理部署。:heart:**
 
 
-#目录
+## 目录
+* [1 Manufacture SDK简介](#1)
+* [2 下载安装Manufacture SDK](#2)
+* [3 Pipeline配置文件说明](#3)
+* [4 Pipeline Node说明](#4)
+* [5 多模型串联的工业表计读数部署](#5)
 
-## Manufactue SDK简介
+## <h2 id="1">1 Manufacture SDK简介</h2>
 
 PaddleX Manufacture基于[PaddleX-Deploy](https://github.com/PaddlePaddle/PaddleX/tree/develop/dygraph/deploy/cpp)的端到端高性能部署能力,将应用深度学习模型的业务逻辑抽象成Pipeline,而接入深度学习模型前的数据前处理、模型预测、模型串联时的中间结果处理等操作都对应于Pipeline中的节点PipelineNode,用户只需在Pipeline配置文件中编排好各节点的前后关系,就可以给Pipeline发送数据并快速地获取相应的推理结果。Manufacture SDK的架构设计如下图所示:
 
@@ -26,7 +31,7 @@ PaddleX Manufacture基于[PaddleX-Deploy](https://github.com/PaddlePaddle/Paddle
 <div align="center">
 <img src="images/pipeline_det.png"  width = "600" />              </div>
 
-## 下载安装Manufacture SDK
+## <h2 id="2">2 下载安装Manufacture SDK</h2>
 
 Manufacture SDK的文件夹结构如下所示:
 
@@ -45,7 +50,7 @@ Manufature SDK的文件夹结构如下所示:
 | -- | -- |
 | | |
 
-## Pipeline配置文件说明
+## <h2 id="3">3 Pipeline配置文件说明</h2>
 
 PaddleX的模型导出后都会在模型文件夹中自动生成一个名为`pipeline.yml`流程编排文件,下面展示单一检测模型的流程配置文件:
 
@@ -74,23 +79,23 @@ pipeline_nodes:
 | 关键字 | 键值 |
 | -- | -- |
 | pipeline_name | Pipeline的名称 |
-| pipeline_nodes | Pipeline的节点列表。列表中**必须包含输入节点(Source)和输出节点(Sink)**。**列表中每个节点还是一个字典,关键字是该节点的名字,键值用于定义节点类型(type)、节点初始化参数(init_params)、连接的下一个节点的名字(next) 。**需要注意的是,**每个节点的名字是独语无二的**。|
+| pipeline_nodes | Pipeline的节点列表。列表中**必须包含输入节点(Source)和输出节点(Sink)**。**列表中每个节点还是一个字典,关键字是该节点的名字,键值用于定义节点类型(type)、节点初始化参数(init_params)、连接的下一个节点的名字(next),需要注意的是,每个节点的名字是独一无二的**。|
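上表中“必须包含Source与Sink节点、节点名字唯一”这两条结构约束,可以用如下Python示意代码表达(仅为说明性草图,实际SDK在C++内部完成解析,此处的`validate_pipeline`为假设的函数名):

```python
def validate_pipeline(pipeline_nodes):
    """Check the structural rules of a parsed `pipeline.yml` node list:
    a Source node and a Sink node must exist, and every node name must
    be unique. Illustrative sketch only."""
    names = []
    types = []
    for node in pipeline_nodes:
        # Each list entry is a one-key dict: {node_name: node_spec}.
        (name, spec), = node.items()
        names.append(name)
        types.append(spec.get("type"))
    if len(set(names)) != len(names):
        raise ValueError("node names must be unique")
    for required in ("Source", "Sink"):
        if required not in types:
            raise ValueError("pipeline must contain a %s node" % required)
    return True
```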
 
-## Pipeline Node说明
+## <h2 id="4">4 Pipeline Node说明</h2>
 
 目前支持的功能节点有:输入、图像解码、图像缩放、感兴趣区域提取、模型推理、检测框过滤、检测/分割结果可视化、输出。各功能节点的类型、初始化参数说明如下:
 
 | 功能类型 type | 功能作用 |初始化参数 init_params | 下一个节点 next | 上一个节点 |
-| -- | -- | -- | -- |
-| Source | 接收Pipeline所需的输入数据 | 无 | `str|List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 无 |
-| Decode | 图像解码 | 无 | `str|List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 只能有一个 |
-| Resize | 图像大小缩放 | `width (int)`: 目标宽;`height (int)`: 目标高;`interp (int)`:差值类型,默认为`1`;| 只能有一个 |
-| Predict | PaddleX导出的分类/检测/分割模型预测 | `model_dir (str)`: PaddleX导出后的模型文件夹所在路径;`use_gpu (bool)`: 是否使用GPU,默认为`false`;`gpu_id (int)`:GPU卡号,在`use_gpu`为`true`时有效 | 只能有一个 |
-| FilterBbox | 过滤置信度低于阈值的检测框 | `score_thresh (float)`: 置信度阈值 | `str|List(str)`: 可以是单个节点名字或多个节点名字组成的列表 |  |
-| RoiCrop | 感兴趣区域提取 | 无 | `str|List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 必须有两个:能给出图像数据的节点、能给出检测模型预测结果的节点 |
+| -- | -- | -- | -- | -- |
+| Source | 接收Pipeline所需的输入数据 | 无 | `str/List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 无 |
+| Decode | 图像解码 | 无 | `str/List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 只能有一个 |
+| Resize | 图像大小缩放 | `width (int)`: 目标宽;<br>`height (int)`: 目标高;<br>`interp (int)`:插值类型,默认为`1` | `str/List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 只能有一个 |
+| Predict | PaddleX导出的分类/检测/分割模型预测 | `model_dir (str)`: PaddleX导出后的模型文件夹所在路径;<br>`use_gpu (bool)`: 是否使用GPU,默认为`false`;<br>`gpu_id (int)`:GPU卡号,在`use_gpu`为`true`时有效 | `str/List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 只能有一个 |
+| FilterBbox | 过滤置信度低于阈值的检测框 | `score_thresh (float)`: 置信度阈值 | `str/List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 只能有一个 |
+| RoiCrop | 感兴趣区域提取 | 无 | `str/List(str)`: 可以是单个节点名字或多个节点名字组成的列表 | 必须有两个:能给出图像数据的节点、能给出检测模型预测结果的节点 |
 | Visualize | 目标检测/实例分割/语义分割模型预测结果可视化 | `save_dir (str)`: 存储可视化结果的文件夹路径 | 无 (可视化结果只能本地保存) | 必须有两个:能给出图像数据的节点、能给目标检测/实例分割/语义分割模型预测结果的节点 |
 | Sink | 获取Pipeline的输出数据 | 无 | 无 | 只能有一个 |
 
-**注意:上一个节点不需要在Pipeline配置文件中指定,只需要指定下一个节点即可,实际运行时程序解析连接至上一个节点*。
+**注意:上一个节点不需要在Pipeline配置文件中指定,只需要指定下一个节点即可,实际运行时程序会自动解析并连接至上一个节点。**
 
-## 多模型串联的工业表计读数部署
+## <h2 id="5">5 多模型串联的工业表计读数部署</h2>

Binary
dygraph/deploy/cpp/docs/manufacture_sdk/images/pipeline_arch.png


Binary
dygraph/deploy/cpp/docs/manufacture_sdk/images/pipeline_det.png


+ 7 - 1
dygraph/deploy/cpp/model_deploy/common/include/transforms.h

@@ -68,8 +68,8 @@ class Normalize : public Transform {
       if (is_scale_) {
         alpha /= (max_val_[c] - min_val_[c]);
       }
+      double beta = -1.0 * (mean_[c] + min_val_[c] * alpha) / std_[c];
       alpha /= std_[c];
-      double beta = -1.0 * mean_[c] / std_[c];
 
       alpha_.push_back(alpha);
       beta_.push_back(beta);
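This hunk folds the `min_val` shift into `beta`, so the fused transform `y = alpha*x + beta` matches min-max scaling followed by mean/std normalization. A quick numeric check of the corrected coefficients (assuming, as the surrounding code suggests, that `alpha` starts at 1.0 before the snippet):

```python
def norm_coeffs(mean, std, min_val=0.0, max_val=255.0, is_scale=True):
    """Mirror the C++ snippet above: fold min-max scaling and mean/std
    normalization into a single affine transform y = alpha*x + beta."""
    alpha = 1.0
    if is_scale:
        alpha /= (max_val - min_val)
    # Corrected beta: account for the min_val shift before dividing by std.
    beta = -1.0 * (mean + min_val * alpha) / std
    alpha /= std
    return alpha, beta

def reference(x, mean, std, min_val=0.0, max_val=255.0):
    """The un-fused form: scale to [0, 1], then standardize."""
    return ((x - min_val) / (max_val - min_val) - mean) / std
```

With a nonzero `min_val`, the old `beta = -mean/std` would disagree with `reference`; the corrected formula agrees for every input.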
@@ -166,6 +166,11 @@ class Resize : public Transform {
     } else {
       use_scale_ = true;
     }
+    if (item["keep_ratio"].IsDefined()) {
+      keep_ratio_ = item["keep_ratio"].as<bool>();
+    } else {
+      keep_ratio_ = false;
+    }
     height_ = item["height"].as<int>();
     width_ = item["width"].as<int>();
     if (height_ <= 0 || width_ <= 0) {
@@ -184,6 +189,7 @@ class Resize : public Transform {
   int width_;
   int interp_;
   bool use_scale_;
+  bool keep_ratio_;
 };
 
 class BGR2RGB : public Transform {

+ 22 - 4
dygraph/deploy/cpp/model_deploy/common/src/transforms.cpp

@@ -147,9 +147,15 @@ bool Resize::Run(cv::Mat *im) {
               << std::endl;
     return false;
   }
+  double scale_w = width_ / static_cast<double>(im->cols);
+  double scale_h = height_ / static_cast<double>(im->rows);
+  if (keep_ratio_) {
+    scale_h = std::min(scale_w, scale_h);
+    scale_w = scale_h;
+    width_ = static_cast<int>(round(scale_w * im->cols));
+    height_ = static_cast<int>(round(scale_h * im->rows));
+  }
   if (use_scale_) {
-    double scale_w = width_ / static_cast<double>(im->cols);
-    double scale_h = height_ / static_cast<double>(im->rows);
     cv::resize(*im, *im, cv::Size(), scale_w, scale_h, interp_);
   } else {
     cv::resize(*im, *im, cv::Size(width_, height_), 0, 0, interp_);
@@ -161,8 +167,20 @@ bool Resize::ShapeInfer(
         const std::vector<int>& in_shape,
         std::vector<int>* out_shape) {
   out_shape->clear();
-  out_shape->push_back(width_);
-  out_shape->push_back(height_);
+  double width = width_;
+  double height = height_;
+  if (keep_ratio_) {
+    int w = in_shape[0];
+    int h = in_shape[1];
+    double scale_w = width_ / static_cast<double>(w);
+    double scale_h = height_ / static_cast<double>(h);
+    scale_h = std::min(scale_w, scale_h);
+    scale_w = scale_h;
+    width = static_cast<int>(round(scale_w * w));
+    height = static_cast<int>(round(scale_h * h));
+  }
+  out_shape->push_back(width);
+  out_shape->push_back(height);
   return true;
 }
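The keep_ratio branch in both Run and ShapeInfer reduces to the same size computation; a minimal Python sketch of that math (function name is mine, not from the repo):

```python
def keep_ratio_size(in_w, in_h, target_w, target_h):
    """With keep_ratio=True, apply the smaller of the two scale factors
    to both axes, so the aspect ratio is preserved and the result fits
    inside target_w x target_h."""
    scale = min(target_w / in_w, target_h / in_h)
    return round(scale * in_w), round(scale * in_h)
```

For example, a 1280x720 image resized toward 608x608 comes out as 608x342: the width-scale 0.475 is the smaller factor and is applied to both axes.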
 

+ 51 - 2
dygraph/deploy/cpp/model_deploy/paddlex/include/x_standard_config.h

@@ -51,6 +51,9 @@ void XNormalize(const YAML::Node& src, YAML::Node* dst) {
     (*dst)["transforms"]["Normalize"]["mean"].push_back(mean[i]);
     (*dst)["transforms"]["Normalize"]["std"].push_back(std[i]);
   }
+  if (src["is_scale"].IsDefined()) {
+    (*dst)["transforms"]["Normalize"]["is_scale"] = src["is_scale"];
+  }
 }
 
 void XResize(const YAML::Node& src, YAML::Node* dst) {
@@ -62,8 +65,13 @@ void XResize(const YAML::Node& src, YAML::Node* dst) {
     w = src["target_size"].as<int>();
     h = src["target_size"].as<int>();
   } else if (src["target_size"].IsSequence()) {
-    w = src["target_size"].as<std::vector<int>>()[0];
-    h = src["target_size"].as<std::vector<int>>()[1];
+    if ((*dst)["version"].as<std::string>() >= "2.0.0") {
+      h = src["target_size"].as<std::vector<int>>()[0];
+      w = src["target_size"].as<std::vector<int>>()[1];
+    } else {
+      w = src["target_size"].as<std::vector<int>>()[0];
+      h = src["target_size"].as<std::vector<int>>()[1];
+    }
   } else {
     std::cerr << "[ERROR] Unexpected value type of `target_size`" << std::endl;
     assert(false);
@@ -87,6 +95,9 @@ void XResize(const YAML::Node& src, YAML::Node* dst) {
       assert(false);
     }
   }
+  if (src["keep_ratio"].IsDefined() && src["keep_ratio"].as<bool>()) {
+    (*dst)["transforms"]["Resize"]["keep_ratio"] = true;
+  }
   (*dst)["transforms"]["Resize"]["width"] = w;
   (*dst)["transforms"]["Resize"]["height"] = h;
   (*dst)["transforms"]["Resize"]["interp"] = interp;
@@ -116,6 +127,44 @@ void XResizeByShort(const YAML::Node& src, YAML::Node* dst) {
   (*dst)["transforms"]["ResizeByShort"]["use_scale"] = false;
 }
 
+// dygraph version
+void XPaddingV2(const YAML::Node& src, YAML::Node* dst) {
+  if (src["target_size"].IsDefined() &&
+      src["target_size"].Type() != YAML::NodeType::Null) {
+    assert(src["target_size"].IsScalar() || src["target_size"].IsSequence());
+    if (src["target_size"].IsScalar()) {
+      (*dst)["transforms"]["Padding"]["width"] = src["target_size"].as<int>();
+      (*dst)["transforms"]["Padding"]["height"] = src["target_size"].as<int>();
+    } else {
+      std::vector<int> target_size = src["target_size"].as<std::vector<int>>();
+      (*dst)["transforms"]["Padding"]["width"] = target_size[0];
+      (*dst)["transforms"]["Padding"]["height"] = target_size[1];
+    }
+  } else if (src["size_divisor"].IsDefined()) {
+    (*dst)["transforms"]["Padding"]["stride"] =
+                        src["size_divisor"].as<int>();
+  } else {
+    std::cerr << "[Error] At least one of size_divisor/"
+              << "target_size must be defined for Padding"
+              << std::endl;
+    assert(false);
+  }
+
+  if (src["im_padding_value"].IsDefined()) {
+    (*dst)["transforms"]["Padding"]["im_padding_value"] =
+            src["im_padding_value"].as<std::vector<float>>();
+  }
+
+  if (src["pad_mode"].IsDefined()) {
+    if (src["pad_mode"].as<int>() != 0) {
+      std::cerr << "[Error] Unsupported pad_mode: "
+              << src["pad_mode"].as<int>()
+              << std::endl;
+      assert(false);
+    }
+  }
+}
+
 void XPadding(const YAML::Node& src, YAML::Node* dst) {
   if (src["coarsest_stride"].IsDefined()) {
     (*dst)["transforms"]["Padding"]["stride"] =
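XPaddingV2 maps the dygraph Padding op onto two output modes: a fixed target size, or padding each dimension up to a multiple of size_divisor (the stride). A rough Python equivalent of the size computation (names are mine):

```python
import math

def padded_size(in_w, in_h, target_size=None, size_divisor=None):
    """Pad to a fixed target size if one is given; otherwise round each
    dimension up to the next multiple of size_divisor (the stride)."""
    if target_size is not None:
        if isinstance(target_size, int):
            return target_size, target_size
        return target_size[0], target_size[1]
    if size_divisor is not None:
        return (math.ceil(in_w / size_divisor) * size_divisor,
                math.ceil(in_h / size_divisor) * size_divisor)
    raise ValueError("at least one of target_size/size_divisor must be set")
```

With size_divisor=32, a 500x375 image is padded to 512x384, matching the stride mode the converter emits.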

+ 5 - 1
dygraph/deploy/cpp/model_deploy/paddlex/src/x_model.cpp

@@ -32,7 +32,11 @@ bool PaddleXModel::GenerateTransformsConfig(const YAML::Node& src) {
     } else if (op_name == "ResizeByLong") {
       XResizeByLong(op.begin()->second, &yaml_config_);
     } else if (op_name == "Padding") {
-      XPadding(op.begin()->second, &yaml_config_);
+      if (src["version"].as<std::string>() >= "2.0.0") {
+        XPaddingV2(op.begin()->second, &yaml_config_);
+      } else {
+        XPadding(op.begin()->second, &yaml_config_);
+      }
     } else if (op_name == "CenterCrop") {
       XCenterCrop(op.begin()->second, &yaml_config_);
     } else if (op_name == "Resize") {
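Both the XResize ordering branch and the XPadding dispatch above hinge on the config's version string: dygraph configs (version >= 2.0.0) store target_size as [height, width], while the older static-graph configs used [width, height]. A sketch of that branch (function name mine; note the lexicographic string comparison mirrors the C++ code and would misorder a hypothetical version "10.0.0"):

```python
def parse_target_size(target_size, version):
    """Return (w, h) from a config target_size, honoring the ordering
    change introduced with the PaddleX 2.0.0 dygraph configs."""
    if isinstance(target_size, int):
        return target_size, target_size
    if version >= "2.0.0":  # lexicographic comparison, as in the C++ code
        h, w = target_size
    else:
        w, h = target_size
    return w, h
```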

+ 0 - 1
dygraph/examples/meter_reader/README.md

@@ -297,7 +297,6 @@ def predict(self,
             seg_batch_size=2):
     """检测图像中的表盘,而后分割出各表盘中的指针和刻度,对分割结果进行读数后处理后得到各表盘的读数。
 
-
         参数:
             img_file (str):待预测的图片路径。
             save_dir (str): 可视化结果的保存路径。

+ 0 - 1
dygraph/examples/meter_reader/reader_infer.py

@@ -530,7 +530,6 @@ class MeterReader:
                 seg_batch_size=2):
         """检测图像中的表盘,而后分割出各表盘中的指针和刻度,对分割结果进行读数后处理后得到各表盘的读数。
 
-
         参数:
             img_file (str):待预测的图片路径。
             save_dir (str): 可视化结果的保存路径。

+ 1 - 1
dygraph/examples/meter_reader/train_segmentation.py

@@ -48,7 +48,7 @@ model.train(
     num_epochs=20,
     train_dataset=train_dataset,
     train_batch_size=4,
-    #pretrain_weights='IMAGENET',
+    pretrain_weights='IMAGENET',
     eval_dataset=eval_dataset,
     learning_rate=0.1,
     save_dir='output/deeplabv3p_r50vd')

+ 58 - 13
dygraph/paddlex/cv/models/base.py

@@ -29,7 +29,7 @@ import paddlex
 from paddlex.cv.transforms import arrange_transforms
 from paddlex.utils import (seconds_to_hms, get_single_card_bs, dict2str,
                            get_pretrain_weights, load_pretrain_weights,
-                           SmoothedValue, TrainingStats,
+                           load_checkpoint, SmoothedValue, TrainingStats,
                            _get_shared_memory_size_in_M, EarlyStop)
 import paddlex.utils.logging as logging
 from .slim.prune import _pruner_eval_fn, _pruner_template_input, sensitive_prune
@@ -56,12 +56,16 @@ class BaseModel:
         self.pruning_ratios = None
         self.quantizer = None
         self.quant_config = None
+        self.fixed_input_shape = None
 
-    def net_initialize(self, pretrain_weights=None, save_dir='.'):
+    def net_initialize(self,
+                       pretrain_weights=None,
+                       save_dir='.',
+                       resume_checkpoint=None):
         if pretrain_weights is not None and \
-                not os.path.exists(pretrain_weights):
-            if not os.path.isdir(save_dir):
-                if os.path.exists(save_dir):
+                not osp.exists(pretrain_weights):
+            if not osp.isdir(save_dir):
+                if osp.exists(save_dir):
                     os.remove(save_dir)
                 os.makedirs(save_dir)
             if self.model_type == 'classifier':
@@ -77,6 +81,37 @@ class BaseModel:
         if pretrain_weights is not None:
             load_pretrain_weights(
                 self.net, pretrain_weights, model_name=self.model_name)
+        if resume_checkpoint is not None:
+            if not osp.exists(resume_checkpoint):
+                logging.error(
+                    "The checkpoint path {} to resume training from does not exist."
+                    .format(resume_checkpoint),
+                    exit=True)
+            if not osp.exists(osp.join(resume_checkpoint, 'model.pdparams')):
+                logging.error(
+                    "Model parameter state dictionary file 'model.pdparams' "
+                    "not found under given checkpoint path {}".format(
+                        resume_checkpoint),
+                    exit=True)
+            if not osp.exists(osp.join(resume_checkpoint, 'model.pdopt')):
+                logging.error(
+                    "Optimizer state dictionary file 'model.pdopt' "
+                    "not found under given checkpoint path {}".format(
+                        resume_checkpoint),
+                    exit=True)
+            if not osp.exists(osp.join(resume_checkpoint, 'model.yml')):
+                logging.error(
+                    "'model.yml' not found under given checkpoint path {}".
+                    format(resume_checkpoint),
+                    exit=True)
+            with open(osp.join(resume_checkpoint, "model.yml")) as f:
+                info = yaml.load(f.read(), Loader=yaml.Loader)
+                self.completed_epochs = info['completed_epochs']
+            load_checkpoint(
+                self.net,
+                self.optimizer,
+                model_name=self.model_name,
+                checkpoint=resume_checkpoint)
 
     def get_model_info(self):
         info = dict()
@@ -96,6 +131,7 @@ class BaseModel:
 
         info['_Attributes']['num_classes'] = self.num_classes
         info['_Attributes']['labels'] = self.labels
+        info['_Attributes']['fixed_input_shape'] = self.fixed_input_shape
 
         try:
             primary_metric_key = list(self.eval_metrics.keys())[0]
@@ -339,7 +375,7 @@ class BaseModel:
             # 每间隔save_interval_epochs, 在验证集上评估和对模型进行保存
             if ema is not None:
                 weight = self.net.state_dict()
-                self.net.set_dict(ema.apply())
+                self.net.set_state_dict(ema.apply())
             eval_epoch_tic = time.time()
             if (i + 1) % save_interval_epochs == 0 or i == num_epochs - 1:
                 if eval_dataset is not None and eval_dataset.num_samples > 0:
@@ -374,7 +410,7 @@ class BaseModel:
                         if earlystop(current_accuracy):
                             break
             if ema is not None:
-                self.net.set_dict(weight)
+                self.net.set_state_dict(weight)
 
     def analyze_sensitivity(self,
                             dataset,
@@ -475,12 +511,21 @@ class BaseModel:
                 # Types of layers that will be quantized.
                 'quantizable_layer_type': ['Conv2D', 'Linear']
             }
-        self.quant_config = quant_config
-        self.quantizer = QAT(config=self.quant_config)
-        logging.info("Preparing the model for quantization-aware training...")
-        self.quantizer.quantize(self.net)
-        logging.info("Model is ready for quantization-aware training.")
-        self.status = 'Quantized'
+        if self.status != 'Quantized':
+            self.quant_config = quant_config
+            self.quantizer = QAT(config=self.quant_config)
+            logging.info(
+                "Preparing the model for quantization-aware training...")
+            self.quantizer.quantize(self.net)
+            logging.info("Model is ready for quantization-aware training.")
+            self.status = 'Quantized'
+        elif quant_config != self.quant_config:
+            logging.error(
+                "The model has been quantized with the following quant_config: {}. "
+                "Doing quantization-aware training with a quantized model "
+                "using a different configuration is not supported."
+                .format(self.quant_config),
+                exit=True)
 
     def _export_inference_model(self, save_dir, image_shape=None):
         save_dir = osp.join(save_dir, 'inference_model')
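The resume_checkpoint path added to net_initialize is only accepted when the directory carries all three artifacts a PaddleX save produces. A standalone sketch of that validation (helper name mine):

```python
import os

REQUIRED_FILES = ("model.pdparams", "model.pdopt", "model.yml")

def validate_checkpoint_dir(checkpoint):
    """Raise if the checkpoint directory is missing the parameter file,
    the optimizer state, or model.yml (which records completed_epochs)."""
    if not os.path.isdir(checkpoint):
        raise FileNotFoundError(
            "checkpoint path does not exist: {}".format(checkpoint))
    missing = [name for name in REQUIRED_FILES
               if not os.path.exists(os.path.join(checkpoint, name))]
    if missing:
        raise FileNotFoundError("checkpoint {} is missing: {}".format(
            checkpoint, ", ".join(missing)))
```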

+ 25 - 6
dygraph/paddlex/cv/models/classifier.py

@@ -82,10 +82,11 @@ class BaseClassifier(BaseModel):
     def _get_test_inputs(self, image_shape):
         if image_shape is not None:
             if len(image_shape) == 2:
-                image_shape = [None, 3] + image_shape
+                image_shape = [1, 3] + image_shape
             self._fix_transforms_shape(image_shape[-2:])
         else:
             image_shape = [None, 3, -1, -1]
+        self.fixed_input_shape = image_shape
         input_spec = [
             InputSpec(
                 shape=image_shape, name='image', dtype='float32')
@@ -191,7 +192,8 @@ class BaseClassifier(BaseModel):
               lr_decay_gamma=0.1,
               early_stop=False,
               early_stop_patience=5,
-              use_vdl=True):
+              use_vdl=True,
+              resume_checkpoint=None):
         """
         Train the model.
         Args:
@@ -206,7 +208,9 @@ class BaseClassifier(BaseModel):
             log_interval_steps(int, optional): Step interval for printing training information. Defaults to 10.
             save_dir(str, optional): Directory to save the model. Defaults to 'output'.
             pretrain_weights(str or None, optional):
-                None or name/path of pretrained weights. If None, no pretrained weights will be loaded. Defaults to 'IMAGENET'.
+                None or name/path of pretrained weights. If None, no pretrained weights will be loaded.
+                At most one of `resume_checkpoint` and `pretrain_weights` can be set simultaneously.
+                Defaults to 'IMAGENET'.
             learning_rate(float, optional): Learning rate for training. Defaults to .025.
             warmup_steps(int, optional): The number of steps of warm-up training. Defaults to 0.
             warmup_start_lr(float, optional): Start learning rate of warm-up training. Defaults to 0..
@@ -216,8 +220,15 @@ class BaseClassifier(BaseModel):
             early_stop(bool, optional): Whether to adopt early stop strategy. Defaults to False.
             early_stop_patience(int, optional): Early stop patience. Defaults to 5.
             use_vdl(bool, optional): Whether to use VisualDL to monitor the training process. Defaults to True.
+            resume_checkpoint(str or None, optional): The path of the checkpoint to resume training from.
+                If None, no training checkpoint will be resumed. At most one of `resume_checkpoint` and
+                `pretrain_weights` can be set simultaneously. Defaults to None.
 
         """
+        if pretrain_weights is not None and resume_checkpoint is not None:
+            logging.error(
+                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                exit=True)
         self.labels = train_dataset.labels
 
         # build optimizer if not defined
@@ -252,7 +263,9 @@ class BaseClassifier(BaseModel):
                     exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
         self.net_initialize(
-            pretrain_weights=pretrain_weights, save_dir=pretrained_dir)
+            pretrain_weights=pretrain_weights,
+            save_dir=pretrained_dir,
+            resume_checkpoint=resume_checkpoint)
 
         # start train loop
         self.train_loop(
@@ -284,6 +297,7 @@ class BaseClassifier(BaseModel):
                           early_stop=False,
                           early_stop_patience=5,
                           use_vdl=True,
+                          resume_checkpoint=None,
                           quant_config=None):
         """
         Quantization-aware training.
@@ -309,6 +323,8 @@ class BaseClassifier(BaseModel):
             use_vdl(bool, optional): Whether to use VisualDL to monitor the training process. Defaults to True.
             quant_config(dict or None, optional): Quantization configuration. If None, a default rule of thumb
                 configuration will be used. Defaults to None.
+            resume_checkpoint(str or None, optional): The path of the checkpoint to resume quantization-aware training
+                from. If None, no training checkpoint will be resumed. Defaults to None.
 
         """
         self._prepare_qat(quant_config)
@@ -329,7 +345,8 @@ class BaseClassifier(BaseModel):
             lr_decay_gamma=lr_decay_gamma,
             early_stop=early_stop,
             early_stop_patience=early_stop_patience,
-            use_vdl=use_vdl)
+            use_vdl=use_vdl,
+            resume_checkpoint=resume_checkpoint)
 
     def evaluate(self, eval_dataset, batch_size=1, return_details=False):
         """
@@ -554,7 +571,7 @@ class AlexNet(BaseClassifier):
                 'Please check image shape after transforms is [3, 224, 224], if not, fixed_input_shape '
                 + 'should be specified manually.')
         self._fix_transforms_shape(image_shape[-2:])
-
+        self.fixed_input_shape = image_shape
         input_spec = [
             InputSpec(
                 shape=image_shape, name='image', dtype='float32')
@@ -762,6 +779,7 @@ class ShuffleNetV2(BaseClassifier):
                 'Please check image shape after transforms is [3, 224, 224], if not, fixed_input_shape '
                 + 'should be specified manually.')
         self._fix_transforms_shape(image_shape[-2:])
+        self.fixed_input_shape = image_shape
         input_spec = [
             InputSpec(
                 shape=image_shape, name='image', dtype='float32')
@@ -788,6 +806,7 @@ class ShuffleNetV2_swish(BaseClassifier):
                 'Please check image shape after transforms is [3, 224, 224], if not, fixed_input_shape '
                 + 'should be specified manually.')
         self._fix_transforms_shape(image_shape[-2:])
+        self.fixed_input_shape = image_shape
         input_spec = [
             InputSpec(
                 shape=image_shape, name='image', dtype='float32')

+ 25 - 23
dygraph/paddlex/cv/models/detector.py

@@ -75,7 +75,7 @@ class BaseDetector(BaseModel):
 
     def _check_image_shape(self, image_shape):
         if len(image_shape) == 2:
-            image_shape = [None, 3] + image_shape
+            image_shape = [1, 3] + image_shape
             if image_shape[-2] % 32 > 0 or image_shape[-1] % 32 > 0:
                 raise Exception(
                     "Height and width in fixed_input_shape must be a multiple of 32, but received {}.".
@@ -88,6 +88,7 @@ class BaseDetector(BaseModel):
             self._fix_transforms_shape(image_shape[-2:])
         else:
             image_shape = [None, 3, -1, -1]
+        self.fixed_input_shape = image_shape
 
         return self._define_input_spec(image_shape)
 
@@ -158,7 +159,8 @@ class BaseDetector(BaseModel):
               use_ema=False,
               early_stop=False,
               early_stop_patience=5,
-              use_vdl=True):
+              use_vdl=True,
+              resume_checkpoint=None):
         """
         Train the model.
         Args:
@@ -185,8 +187,15 @@ class BaseDetector(BaseModel):
             early_stop(bool, optional): Whether to adopt early stop strategy. Defaults to False.
             early_stop_patience(int, optional): Early stop patience. Defaults to 5.
             use_vdl(bool, optional): Whether to use VisualDL to monitor the training process. Defaults to True.
+            resume_checkpoint(str or None, optional): The path of the checkpoint to resume training from.
+                If None, no training checkpoint will be resumed. At most one of `resume_checkpoint` and
+                `pretrain_weights` can be set simultaneously. Defaults to None.
 
         """
+        if pretrain_weights is not None and resume_checkpoint is not None:
+            logging.error(
+                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                exit=True)
         if train_dataset.__class__.__name__ == 'VOCDetection':
             train_dataset.data_fields = {
                 'im_id', 'image_shape', 'image', 'gt_bbox', 'gt_class',
@@ -253,7 +262,9 @@ class BaseDetector(BaseModel):
                     exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
         self.net_initialize(
-            pretrain_weights=pretrain_weights, save_dir=pretrained_dir)
+            pretrain_weights=pretrain_weights,
+            save_dir=pretrained_dir,
+            resume_checkpoint=resume_checkpoint)
 
         if use_ema:
             ema = ExponentialMovingAverage(
@@ -293,6 +304,7 @@ class BaseDetector(BaseModel):
                           early_stop=False,
                           early_stop_patience=5,
                           use_vdl=True,
+                          resume_checkpoint=None,
                           quant_config=None):
         """
         Quantization-aware training.
@@ -320,6 +332,8 @@ class BaseDetector(BaseModel):
             use_vdl(bool, optional): Whether to use VisualDL to monitor the training process. Defaults to True.
             quant_config(dict or None, optional): Quantization configuration. If None, a default rule of thumb
                 configuration will be used. Defaults to None.
+            resume_checkpoint(str or None, optional): The path of the checkpoint to resume quantization-aware training
+                from. If None, no training checkpoint will be resumed. Defaults to None.
 
         """
         self._prepare_qat(quant_config)
@@ -342,7 +356,8 @@ class BaseDetector(BaseModel):
             use_ema=use_ema,
             early_stop=early_stop,
             early_stop_patience=early_stop_patience,
-            use_vdl=use_vdl)
+            use_vdl=use_vdl,
+            resume_checkpoint=resume_checkpoint)
 
     def evaluate(self,
                  eval_dataset,
@@ -1020,6 +1035,7 @@ class FasterRCNN(BaseDetector):
                 self.test_transforms.transforms.append(
                     Padding(im_padding_value=[0., 0., 0.]))
 
+        self.fixed_input_shape = image_shape
         return self._define_input_spec(image_shape)
 
 
@@ -1414,14 +1430,10 @@ class PPYOLOv2(YOLOv3):
 
     def _get_test_inputs(self, image_shape):
         if image_shape is not None:
-            if len(image_shape) == 2:
-                image_shape = [None, 3] + image_shape
-            if image_shape[-2] % 32 > 0 or image_shape[-1] % 32 > 0:
-                raise Exception(
-                    "Height and width in fixed_input_shape must be a multiple of 32, but recieved is {}.".
-                    format(image_shape[-2:]))
+            image_shape = self._check_image_shape(image_shape)
             self._fix_transforms_shape(image_shape[-2:])
         else:
+            image_shape = [None, 3, 608, 608]
             logging.warning(
                 '[Important!!!] When exporting inference model for {},'.format(
                     self.__class__.__name__) +
@@ -1429,20 +1441,9 @@ class PPYOLOv2(YOLOv3):
                 +
                 'Please check image shape after transforms is [3, 608, 608], if not, fixed_input_shape '
                 + 'should be specified manually.')
-            image_shape = [None, 3, 608, 608]
-
-        input_spec = [{
-            "image": InputSpec(
-                shape=image_shape, name='image', dtype='float32'),
-            "im_shape": InputSpec(
-                shape=[image_shape[0], 2], name='im_shape', dtype='float32'),
-            "scale_factor": InputSpec(
-                shape=[image_shape[0], 2],
-                name='scale_factor',
-                dtype='float32')
-        }]
 
-        return input_spec
+        self.fixed_input_shape = image_shape
+        return self._define_input_spec(image_shape)
 
 
 class MaskRCNN(BaseDetector):
@@ -1741,5 +1742,6 @@ class MaskRCNN(BaseDetector):
             if self.with_fpn:
                 self.test_transforms.transforms.append(
                     Padding(im_padding_value=[0., 0., 0.]))
+        self.fixed_input_shape = image_shape
 
         return self._define_input_spec(image_shape)

+ 2 - 2
dygraph/paddlex/cv/models/load_model.py

@@ -107,8 +107,8 @@ def load_model(model_dir):
         if status == 'Quantized':
             with open(osp.join(model_dir, "quant.yml")) as f:
                 quant_info = yaml.load(f.read(), Loader=yaml.Loader)
-                quant_config = quant_info['quant_config']
-                model.quantizer = paddleslim.QAT(quant_config)
+                model.quant_config = quant_info['quant_config']
+                model.quantizer = paddleslim.QAT(model.quant_config)
                 model.quantizer.quantize(model.net)
 
         if status == 'Infer':

+ 19 - 4
dygraph/paddlex/cv/models/segmenter.py

@@ -82,10 +82,11 @@ class BaseSegmenter(BaseModel):
     def _get_test_inputs(self, image_shape):
         if image_shape is not None:
             if len(image_shape) == 2:
-                image_shape = [None, 3] + image_shape
+                image_shape = [1, 3] + image_shape
             self._fix_transforms_shape(image_shape[-2:])
         else:
             image_shape = [None, 3, -1, -1]
+        self.fixed_input_shape = image_shape
         input_spec = [
             InputSpec(
                 shape=image_shape, name='image', dtype='float32')
@@ -193,7 +194,8 @@ class BaseSegmenter(BaseModel):
               lr_decay_power=0.9,
               early_stop=False,
               early_stop_patience=5,
-              use_vdl=True):
+              use_vdl=True,
+              resume_checkpoint=None):
         """
         Train the model.
         Args:
@@ -214,8 +216,15 @@ class BaseSegmenter(BaseModel):
             early_stop(bool, optional): Whether to adopt early stop strategy. Defaults to False.
             early_stop_patience(int, optional): Early stop patience. Defaults to 5.
             use_vdl(bool, optional): Whether to use VisualDL to monitor the training process. Defaults to True.
+            resume_checkpoint(str or None, optional): The path of the checkpoint to resume training from.
+                If None, no training checkpoint will be resumed. At most one of `resume_checkpoint` and
+                `pretrain_weights` can be set simultaneously. Defaults to None.
 
         """
+        if pretrain_weights is not None and resume_checkpoint is not None:
+            logging.error(
+                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                exit=True)
         self.labels = train_dataset.labels
         if self.losses is None:
             self.losses = self.default_loss()
@@ -248,7 +257,9 @@ class BaseSegmenter(BaseModel):
                     exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
         self.net_initialize(
-            pretrain_weights=pretrain_weights, save_dir=pretrained_dir)
+            pretrain_weights=pretrain_weights,
+            save_dir=pretrained_dir,
+            resume_checkpoint=resume_checkpoint)
 
         self.train_loop(
             num_epochs=num_epochs,
@@ -276,6 +287,7 @@ class BaseSegmenter(BaseModel):
                           early_stop=False,
                           early_stop_patience=5,
                           use_vdl=True,
+                          resume_checkpoint=None,
                           quant_config=None):
         """
         Quantization-aware training.
@@ -297,6 +309,8 @@ class BaseSegmenter(BaseModel):
             use_vdl(bool, optional): Whether to use VisualDL to monitor the training process. Defaults to True.
             quant_config(dict or None, optional): Quantization configuration. If None, a default rule of thumb
                 configuration will be used. Defaults to None.
+            resume_checkpoint(str or None, optional): The path of the checkpoint to resume quantization-aware training
+                from. If None, no training checkpoint will be resumed. Defaults to None.
 
         """
         self._prepare_qat(quant_config)
@@ -314,7 +328,8 @@ class BaseSegmenter(BaseModel):
             lr_decay_power=lr_decay_power,
             early_stop=early_stop,
             early_stop_patience=early_stop_patience,
-            use_vdl=use_vdl)
+            use_vdl=use_vdl,
+            resume_checkpoint=resume_checkpoint)
 
     def evaluate(self, eval_dataset, batch_size=1, return_details=False):
         """

+ 2 - 2
dygraph/paddlex/cv/transforms/operators.py

@@ -318,7 +318,7 @@ class RandomResize(Transform):
     Attention:If interp is 'RANDOM', the interpolation method will be chose randomly.
 
     Args:
-        target_sizes (List[int], List[list or tuple] or Tuple[lsit or tuple]):
+        target_sizes (List[int], List[list or tuple] or Tuple[list or tuple]):
             Multiple target sizes, each target size is an int or list/tuple.
         interp ({'NEAREST', 'LINEAR', 'CUBIC', 'AREA', 'LANCZOS4', 'RANDOM'}, optional):
             Interpolation method of resize. Defaults to 'LINEAR'.
@@ -943,7 +943,7 @@ class Padding(Transform):
             assert offsets, 'if pad_mode is -1, offsets should not be None'
 
         self.target_size = target_size
-        self.coarsest_stride = size_divisor
+        self.size_divisor = size_divisor
         self.pad_mode = pad_mode
         self.offsets = offsets
         self.im_padding_value = im_padding_value

+ 1 - 1
dygraph/paddlex/utils/__init__.py

@@ -17,7 +17,7 @@ from . import utils
 from .utils import (seconds_to_hms, get_encoding, get_single_card_bs, dict2str,
                     EarlyStop, path_normalization, is_pic, MyEncoder,
                     DisablePrint)
-from .checkpoint import get_pretrain_weights, load_pretrain_weights
+from .checkpoint import get_pretrain_weights, load_pretrain_weights, load_checkpoint
 from .env import get_environ_info, get_num_workers, init_parallel_env
 from .download import download_and_decompress, decompress
 from .stats import SmoothedValue, TrainingStats

+ 19 - 1
dygraph/paddlex/utils/checkpoint.py

@@ -394,7 +394,7 @@ def load_pretrain_weights(model, pretrain_weights=None, model_name=None):
                 else:
                     model_state_dict[k] = para_state_dict[k]
                     num_params_loaded += 1
-            model.set_dict(model_state_dict)
+            model.set_state_dict(model_state_dict)
             logging.info("There are {}/{} variables loaded into {}.".format(
                 num_params_loaded, len(model_state_dict), model_name))
         else:
@@ -404,3 +404,21 @@ def load_pretrain_weights(model, pretrain_weights=None, model_name=None):
         logging.info(
             'No pretrained model to load, {} will be trained from scratch.'.
             format(model_name))
+
+
+def load_optimizer(optimizer, state_dict_path):
+    logging.info("Loading optimizer from {}".format(state_dict_path))
+    optim_state_dict = paddle.load(state_dict_path)
+    if 'last_epoch' in optim_state_dict:
+        optim_state_dict.pop('last_epoch')
+    optimizer.set_state_dict(optim_state_dict)
+
+
+def load_checkpoint(model, optimizer, model_name, checkpoint):
+    logging.info("Loading checkpoint from {}".format(checkpoint))
+    load_pretrain_weights(
+        model,
+        pretrain_weights=osp.join(checkpoint, 'model.pdparams'),
+        model_name=model_name)
+    load_optimizer(
+        optimizer, state_dict_path=osp.join(checkpoint, "model.pdopt"))
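load_optimizer drops last_epoch from the optimizer state before restoring it, presumably because the completed_epochs recorded in model.yml drives the resumed schedule instead. The filtering step in isolation (function name mine):

```python
def strip_last_epoch(optim_state):
    """Return a copy of the optimizer state dict without 'last_epoch',
    matching what load_optimizer does before set_state_dict."""
    state = dict(optim_state)
    state.pop("last_epoch", None)
    return state
```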