
Merge branch 'develop' of github.com:PaddlePaddle/PaddleX into develop

jack, 5 years ago
commit 3b5012beef

+ 0 - 1
docs/paddlex_gui/download.md

@@ -25,4 +25,3 @@
   * **Disk space**: an SSD with at least 1 TB of free space is recommended (not required)  
 
 ***Note: on Windows and Mac OS, PaddleX supports only single-GPU models. NCCL is not yet supported on Windows.***
-

+ 6 - 6
docs/paddlex_gui/how_to_use.md

@@ -42,7 +42,7 @@ PaddleX GUI is a graphical development client built on PaddleX
 
 Before starting model training, you need to annotate your data in the format required by the task type. PaddleX currently supports four task types: image classification, object detection, semantic segmentation, and instance segmentation. See [data annotation](https://paddlex.readthedocs.io/zh_CN/latest/appendix/datasets.html) for how data is prepared for each task type.
 
- 
+
 
 **Step 2: Import my dataset**
 
@@ -116,26 +116,26 @@ PaddleX GUI is a graphical development client built on PaddleX
 
    PaddleX runs entirely on your local hardware. Deep learning tasks do demand significant compute; to let you get started with PaddleX quickly we support CPU, but we strongly recommend a GPU for faster training and a better development experience.
 
-   
+
 
 2. **Can I deploy PaddleX on a server or cloud platform?**
 
    PaddleX GUI is a client designed for local, single-machine installation and cannot be deployed on a server directly. You can use the PaddleX API instead, or deploy on a server with the PaddlePaddle core framework. If you want to use public compute, we strongly recommend trying [EasyDL](https://ai.baidu.com/easydl/) or [AI Studio](https://aistudio.baidu.com/aistudio/index) from the PaddlePaddle product family.
 
-   
+
 
 3. **Does PaddleX support data annotated with EasyData?**
 
    Yes, PaddleX reads EasyData annotations without trouble. However, the current version of PaddleX GUI cannot import the EasyData format directly; following the documentation, [convert the dataset](https://paddlex.readthedocs.io/zh_CN/latest/appendix/how_to_convert_dataset.html) first and then import it into PaddleX GUI.
    We are also actively developing direct EasyData import for PaddleX GUI.
-   
-   
+
+
 
 4. **Why does model pruning analysis take so long?**
 
    Pruning analysis evaluates the sensitivity of each convolutional layer and prunes parameters at different ratios according to their impact on model accuracy. The process is repeated until the FLOPs target is met, and the pruned model is then fine-tuned, which is why it takes a long time. For the theory behind pruning, see [introduction to pruning](https://paddlepaddle.github.io/PaddleSlim/algo/algo.html#2-%E5%8D%B7%E7%A7%AF%E6%A0%B8%E5%89%AA%E8%A3%81%E5%8E%9F%E7%90%86)
 
-   
+
 
 5. **How do I call the backend code?**
 
 
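The pruning answer describes a loop: measure each convolutional layer's sensitivity, prune less sensitive layers more aggressively, and repeat until the FLOPs budget is met before fine-tuning. A toy, framework-free sketch of that idea (the function name, numbers, and selection heuristic are illustrative, not PaddleSlim's actual API):

```python
def prune_until_flops(layer_flops, sensitivity, target_flops, step=0.1):
    """Toy pruning schedule: repeatedly prune the layer that is currently
    cheapest to prune (low sensitivity, not yet pruned much) until the
    model's total FLOPs drop below target_flops."""
    ratios = {name: 0.0 for name in layer_flops}

    def total_flops():
        return sum(f * (1.0 - ratios[n]) for n, f in layer_flops.items())

    while total_flops() > target_flops:
        # Pick the layer whose (sensitivity + current prune ratio) is lowest,
        # so pruning spreads out instead of hollowing one layer completely.
        name = min(ratios, key=lambda n: sensitivity[n] + ratios[n])
        ratios[name] = min(ratios[name] + step, 0.9)
    return ratios

layer_flops = {"conv1": 100.0, "conv2": 300.0, "conv3": 600.0}
sensitivity = {"conv1": 0.8, "conv2": 0.3, "conv3": 0.1}  # higher = prune less
ratios = prune_until_flops(layer_flops, sensitivity, target_flops=700.0)
```

The real analysis additionally re-evaluates the model after each pruning step, which is what makes it slow.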

+ 12 - 12
docs/tutorials/datasets.md

@@ -224,7 +224,7 @@ labelB
 └--labels.txt  # label list file
 
 ```
-Image file names must correspond one-to-one with the json file names.   
+Image file names must correspond one-to-one with the json file names.  
 
 Each json file stores the annotation information under `labels`, as shown below:
 ```
@@ -269,17 +269,17 @@ labelB
 └--labels.txt  # label list file
 
 ```
-Image file names must correspond one-to-one with the json file names.   
+Image file names must correspond one-to-one with the json file names.  
 
 Each json file stores the annotation information under `labels`, as shown below:
 ```
-"labels": [{"y1": 18, "x2": 883, "x1": 371, "y2": 404, "name": "labelA", 
-            "mask": "kVfc0`0Zg0<F7J7I5L5K4L4L4L3N3L3N3L3N2N3M2N2N2N2N2N2N1O2N2O1N2N1O2O1N101N1O2O1N101N10001N101N10001N10001O0O10001O000O100000001O0000000000000000000000O1000001O00000O101O000O101O0O101O0O2O0O101O0O2O0O2N2O0O2O0O2N2O1N1O2N2N2O1N2N2N2N2N2N2M3N3M2M4M2M4M3L4L4L4K6K5J7H9E\\iY1"}, 
+"labels": [{"y1": 18, "x2": 883, "x1": 371, "y2": 404, "name": "labelA",
+            "mask": "kVfc0`0Zg0<F7J7I5L5K4L4L4L3N3L3N3L3N2N3M2N2N2N2N2N2N1O2N2O1N2N1O2O1N101N1O2O1N101N10001N101N10001N10001O0O10001O000O100000001O0000000000000000000000O1000001O00000O101O000O101O0O101O0O2O0O101O0O2O0O2N2O0O2O0O2N2O1N1O2N2N2O1N2N2N2N2N2N2M3N3M2M4M2M4M3L4L4L4K6K5J7H9E\\iY1"},
            {"y1": 314, "x2": 666, "x1": 227, "y2": 676, "name": "labelB",
-            "mask": "mdQ8g0Tg0:G8I6K5J5L4L4L4L4M2M4M2M4M2N2N2N3L3N2N2N2N2O1N1O2N2N2O1N1O2N2O0O2O1N1O2O0O2O0O2O001N100O2O000O2O000O2O00000O2O000000001N100000000000000000000000000000000001O0O100000001O0O10001N10001O0O101N10001N101N101N101N101N2O0O2N2O0O2N2N2O0O2N2N2N2N2N2N2N2N2N3L3N2N3L3N3L4M2M4L4L5J5L5J7H8H;BUcd<"}, 
+            "mask": "mdQ8g0Tg0:G8I6K5J5L4L4L4L4M2M4M2M4M2N2N2N3L3N2N2N2N2O1N1O2N2N2O1N1O2N2O0O2O1N1O2O0O2O0O2O001N100O2O000O2O000O2O00000O2O000000001N100000000000000000000000000000000001O0O100000001O0O10001N10001O0O101N10001N101N101N101N101N2O0O2N2O0O2N2N2O0O2N2N2N2N2N2N2N2N2N3L3N2N3L3N3L4M2M4L4L5J5L5J7H8H;BUcd<"},
            ...]}
 ```
-Each element in the list is one annotation; its fields are described below: 
+Each element in the list is one annotation; its fields are described below:
 
 | Field | Meaning | Data type | Notes |
 |:--------|:------------|------|:-----|
@@ -327,17 +327,17 @@ labelB
 └--labels.txt  # label list file
 
 ```
-Image file names must correspond one-to-one with the json file names.   
+Image file names must correspond one-to-one with the json file names.  
 
 Each json file stores the annotation information under `labels`, as shown below:
 ```
-"labels": [{"y1": 18, "x2": 883, "x1": 371, "y2": 404, "name": "labelA", 
-            "mask": "kVfc0`0Zg0<F7J7I5L5K4L4L4L3N3L3N3L3N2N3M2N2N2N2N2N2N1O2N2O1N2N1O2O1N101N1O2O1N101N10001N101N10001N10001O0O10001O000O100000001O0000000000000000000000O1000001O00000O101O000O101O0O101O0O2O0O101O0O2O0O2N2O0O2O0O2N2O1N1O2N2N2O1N2N2N2N2N2N2M3N3M2M4M2M4M3L4L4L4K6K5J7H9E\\iY1"}, 
+"labels": [{"y1": 18, "x2": 883, "x1": 371, "y2": 404, "name": "labelA",
+            "mask": "kVfc0`0Zg0<F7J7I5L5K4L4L4L3N3L3N3L3N2N3M2N2N2N2N2N2N1O2N2O1N2N1O2O1N101N1O2O1N101N10001N101N10001N10001O0O10001O000O100000001O0000000000000000000000O1000001O00000O101O000O101O0O101O0O2O0O101O0O2O0O2N2O0O2O0O2N2O1N1O2N2N2O1N2N2N2N2N2N2M3N3M2M4M2M4M3L4L4L4K6K5J7H9E\\iY1"},
            {"y1": 314, "x2": 666, "x1": 227, "y2": 676, "name": "labelB",
-            "mask": "mdQ8g0Tg0:G8I6K5J5L4L4L4L4M2M4M2M4M2N2N2N3L3N2N2N2N2O1N1O2N2N2O1N1O2N2O0O2O1N1O2O0O2O0O2O001N100O2O000O2O000O2O00000O2O000000001N100000000000000000000000000000000001O0O100000001O0O10001N10001O0O101N10001N101N101N101N101N2O0O2N2O0O2N2N2O0O2N2N2N2N2N2N2N2N2N3L3N2N3L3N3L4M2M4L4L5J5L5J7H8H;BUcd<"}, 
+            "mask": "mdQ8g0Tg0:G8I6K5J5L4L4L4L4M2M4M2M4M2N2N2N3L3N2N2N2N2O1N1O2N2N2O1N1O2N2O0O2O1N1O2O0O2O0O2O001N100O2O000O2O000O2O00000O2O000000001N100000000000000000000000000000000001O0O100000001O0O10001N10001O0O101N10001N101N101N101N101N2O0O2N2O0O2N2N2O0O2N2N2N2N2N2N2N2N2N3L3N2N3L3N3L4M2M4L4L5J5L5J7H8H;BUcd<"},
            ...]}
 ```
-Each element in the list is one annotation; its fields are described below: 
+Each element in the list is one annotation; its fields are described below:
 
 | Field | Meaning | Data type | Notes |
 |:--------|:------------|------|:-----|
@@ -363,4 +363,4 @@ labelB
 ```
 
 [Click here](https://ai.baidu.com/easydata/) to annotate an EasyDataSeg segmentation dataset.  
-In PaddleX, use `paddlex.cv.datasets.EasyDataSeg` ([API reference](./apis/datasets.html#easydataseg)) to load the segmentation dataset.
+In PaddleX, use `paddlex.cv.datasets.EasyDataSeg` ([API reference](./apis/datasets.html#easydataseg)) to load the segmentation dataset.
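The `labels` structure documented above can be consumed with plain `json`; a minimal sketch (the masks are shortened to placeholders here — real values are the run-length-encoded strings shown in the hunks):

```python
import json

# A minimal annotation in the format described above (mask strings truncated).
annotation = json.loads("""
{"labels": [
    {"y1": 18, "x2": 883, "x1": 371, "y2": 404, "name": "labelA", "mask": "kVfc0..."},
    {"y1": 314, "x2": 666, "x1": 227, "y2": 676, "name": "labelB", "mask": "mdQ8..."}
]}
""")

for obj in annotation["labels"]:
    # Each element is one annotated instance: box corners, class name, mask.
    w = obj["x2"] - obj["x1"]
    h = obj["y2"] - obj["y1"]
    print(obj["name"], (obj["x1"], obj["y1"]), "size", (w, h))
# -> labelA (371, 18) size (512, 386)
# -> labelB (227, 314) size (439, 362)
```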

+ 1 - 1
docs/tutorials/deploy/deploy_server/deploy_cpp/deploy_cpp_win_vs2019.md

@@ -49,7 +49,7 @@ The PaddlePaddle C++ inference library comes in builds that differ by GPU support, TensorRT support, and
 
 ### Step3: Install and configure OpenCV
 
-1. Download the 3.4.6 release for Windows from the OpenCV site, [download link](https://sourceforge.net/projects/opencvlibrary/files/3.4.6/opencv-3.4.6-vc14_vc15.exe/download)  
+1. Download the 3.4.6 release for Windows from the OpenCV site, [download link](https://bj.bcebos.com/paddleseg/deploy/opencv-3.4.6-vc14_vc15.exe)  
 2. Run the downloaded executable and extract OpenCV to a directory of your choice, e.g. `D:\projects\opencv`
 3. Configure environment variables as follows  
     - My Computer -> Properties -> Advanced system settings -> Environment Variables

+ 1 - 1
docs/tutorials/deploy/upgrade_version.md

@@ -11,4 +11,4 @@
 ```
 paddlex --export_inference --model_dir=/path/to/low_version_model --save_dir=/path/to/high_version_model
 ```
-`--model_dir` is the path to a model whose version is below 1.0.0; it can be a model saved during PaddleX training or one exported in inference format. `--save_dir` receives the converted high-version model, which can then be used for multi-device deployment.
+`--model_dir` is the path to a model whose version is below 1.0.0; it can be a model saved during PaddleX training or one exported in inference format. `--save_dir` receives the converted high-version model, which can then be used for multi-device deployment.

+ 5 - 2
paddlex/cv/datasets/voc.py

@@ -106,8 +106,11 @@ class VOCDetection(Dataset):
                     ct = int(tree.find('id').text)
                     im_id = np.array([int(tree.find('id').text)])
                 pattern = re.compile('<object>', re.IGNORECASE)
-                obj_tag = pattern.findall(
-                    str(ET.tostringlist(tree.getroot())))[0][1:-1]
+                obj_match = pattern.findall(
+                    str(ET.tostringlist(tree.getroot())))
+                if len(obj_match) == 0:
+                    continue
+                obj_tag = obj_match[0][1:-1]
                 objs = tree.findall(obj_tag)
                 pattern = re.compile('<size>', re.IGNORECASE)
                 size_tag = pattern.findall(
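The change above replaces direct indexing of `findall(...)[0]` with a guard, so annotation files containing no `<object>` entries are skipped instead of raising an `IndexError`. A minimal standalone sketch of the same pattern (names hypothetical):

```python
import re

def find_object_tag(xml_text):
    """Return the object tag name as written in the file (any case),
    or None if the annotation contains no objects."""
    matches = re.compile('<object>', re.IGNORECASE).findall(xml_text)
    if not matches:          # background image: no objects annotated
        return None
    return matches[0][1:-1]  # strip the surrounding '<' and '>'

# An annotation with an upper-cased tag, and one with no objects at all.
with_obj = "<annotation><OBJECT><name>cat</name></OBJECT></annotation>"
no_obj = "<annotation><filename>bg.jpg</filename></annotation>"

print(find_object_tag(with_obj))  # -> OBJECT
print(find_object_tag(no_obj))    # -> None
```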

+ 32 - 31
paddlex/cv/models/base.py

@@ -73,6 +73,7 @@ class BaseAPI:
         self.status = 'Normal'
         # number of completed epochs; used as the starting epoch when resuming training
         self.completed_epochs = 0
+        self.scope = fluid.global_scope()
 
     def _get_single_card_bs(self, batch_size):
         if batch_size % len(self.places) == 0:
@@ -84,6 +85,10 @@
                                 'place']))
 
     def build_program(self):
+        if hasattr(paddlex, 'model_built') and paddlex.model_built:
+            logging.error(
+                "Function model.train() can only be called once in your code.")
+        paddlex.model_built = True
         # build the training network
         self.train_inputs, self.train_outputs = self.build_net(mode='train')
         self.train_prog = fluid.default_main_program()
@@ -155,7 +160,7 @@
             outputs=self.test_outputs,
             batch_size=batch_size,
             batch_nums=batch_num,
-            scope=None,
+            scope=self.scope,
             algo='KL',
             quantizable_op_type=["conv2d", "depthwise_conv2d", "mul"],
             is_full_quantize=False,
@@ -244,8 +249,8 @@
             logging.info(
                 "Load pretrain weights from {}.".format(pretrain_weights),
                 use_color=True)
-            paddlex.utils.utils.load_pretrain_weights(
-                self.exe, self.train_prog, pretrain_weights, fuse_bn)
+            paddlex.utils.utils.load_pretrain_weights(self.exe, self.train_prog,
+                                                      pretrain_weights, fuse_bn)
         # model pruning
         if sensitivities_file is not None:
             import paddleslim
@@ -349,27 +354,26 @@
         logging.info("Model saved in {}.".format(save_dir))
 
     def export_inference_model(self, save_dir):
-        test_input_names = [
-            var.name for var in list(self.test_inputs.values())
-        ]
+        test_input_names = [var.name for var in list(self.test_inputs.values())]
         test_outputs = list(self.test_outputs.values())
-        if self.__class__.__name__ == 'MaskRCNN':
-            from paddlex.utils.save import save_mask_inference_model
-            save_mask_inference_model(
-                dirname=save_dir,
-                executor=self.exe,
-                params_filename='__params__',
-                feeded_var_names=test_input_names,
-                target_vars=test_outputs,
-                main_program=self.test_prog)
-        else:
-            fluid.io.save_inference_model(
-                dirname=save_dir,
-                executor=self.exe,
-                params_filename='__params__',
-                feeded_var_names=test_input_names,
-                target_vars=test_outputs,
-                main_program=self.test_prog)
+        with fluid.scope_guard(self.scope):
+            if self.__class__.__name__ == 'MaskRCNN':
+                from paddlex.utils.save import save_mask_inference_model
+                save_mask_inference_model(
+                    dirname=save_dir,
+                    executor=self.exe,
+                    params_filename='__params__',
+                    feeded_var_names=test_input_names,
+                    target_vars=test_outputs,
+                    main_program=self.test_prog)
+            else:
+                fluid.io.save_inference_model(
+                    dirname=save_dir,
+                    executor=self.exe,
+                    params_filename='__params__',
+                    feeded_var_names=test_input_names,
+                    target_vars=test_outputs,
+                    main_program=self.test_prog)
         model_info = self.get_model_info()
         model_info['status'] = 'Infer'
 
@@ -388,8 +392,7 @@
 
         # marker indicating the model was saved successfully
         open(osp.join(save_dir, '.success'), 'w').close()
-        logging.info("Model for inference deploy saved in {}.".format(
-            save_dir))
+        logging.info("Model for inference deploy saved in {}.".format(save_dir))
 
     def train_loop(self,
                    num_epochs,
@@ -513,13 +516,11 @@
                         eta = ((num_epochs - i) * total_num_steps - step - 1
                               ) * avg_step_time
                     if time_eval_one_epoch is not None:
-                        eval_eta = (
-                            total_eval_times - i // save_interval_epochs
-                        ) * time_eval_one_epoch
+                        eval_eta = (total_eval_times - i // save_interval_epochs
+                                    ) * time_eval_one_epoch
                     else:
-                        eval_eta = (
-                            total_eval_times - i // save_interval_epochs
-                        ) * total_num_steps_eval * avg_step_time
+                        eval_eta = (total_eval_times - i // save_interval_epochs
+                                    ) * total_num_steps_eval * avg_step_time
                     eta_str = seconds_to_hms(eta + eval_eta)
 
                     logging.info(

+ 17 - 13
paddlex/cv/models/classifier.py

@@ -1,11 +1,11 @@
 # copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
-# 
+#
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
-# 
+#
 #     http://www.apache.org/licenses/LICENSE-2.0
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -227,9 +227,10 @@ class BaseClassifier(BaseAPI):
         true_labels = list()
         pred_scores = list()
         if not hasattr(self, 'parallel_test_prog'):
-            self.parallel_test_prog = fluid.CompiledProgram(
-                self.test_prog).with_data_parallel(
-                    share_vars_from=self.parallel_train_prog)
+            with fluid.scope_guard(self.scope):
+                self.parallel_test_prog = fluid.CompiledProgram(
+                    self.test_prog).with_data_parallel(
+                        share_vars_from=self.parallel_train_prog)
         batch_size_each_gpu = self._get_single_card_bs(batch_size)
         logging.info("Start to evaluating(total_samples={}, total_steps={})...".
                      format(eval_dataset.num_samples, total_steps))
@@ -242,9 +243,11 @@ class BaseClassifier(BaseAPI):
                 num_pad_samples = batch_size - num_samples
                 pad_images = np.tile(images[0:1], (num_pad_samples, 1, 1, 1))
                 images = np.concatenate([images, pad_images])
-            outputs = self.exe.run(self.parallel_test_prog,
-                                   feed={'image': images},
-                                   fetch_list=list(self.test_outputs.values()))
+            with fluid.scope_guard(self.scope):
+                outputs = self.exe.run(
+                    self.parallel_test_prog,
+                    feed={'image': images},
+                    fetch_list=list(self.test_outputs.values()))
             outputs = [outputs[0][:num_samples]]
             true_labels.extend(labels)
             pred_scores.extend(outputs[0].tolist())
@@ -286,10 +289,11 @@ class BaseClassifier(BaseAPI):
             self.arrange_transforms(
                 transforms=self.test_transforms, mode='test')
             im = self.test_transforms(img_file)
-        result = self.exe.run(self.test_prog,
-                              feed={'image': im},
-                              fetch_list=list(self.test_outputs.values()),
-                              use_program_cache=True)
+        with fluid.scope_guard(self.scope):
+            result = self.exe.run(self.test_prog,
+                                  feed={'image': im},
+                                  fetch_list=list(self.test_outputs.values()),
+                                  use_program_cache=True)
         pred_label = np.argsort(result[0][0])[::-1][:true_topk]
         res = [{
             'category_id': l,

+ 21 - 19
paddlex/cv/models/deeplabv3p.py

@@ -1,11 +1,11 @@
 # copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
-# 
+#
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
-# 
+#
 #     http://www.apache.org/licenses/LICENSE-2.0
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -317,19 +317,18 @@ class DeepLabv3p(BaseAPI):
             tuple (metrics, eval_details): when return_details is True, a dict (eval_details) is also returned,
                 containing the key 'confusion_matrix', the confusion matrix of the evaluation.
         """
-        self.arrange_transforms(
-            transforms=eval_dataset.transforms, mode='eval')
+        self.arrange_transforms(transforms=eval_dataset.transforms, mode='eval')
         total_steps = math.ceil(eval_dataset.num_samples * 1.0 / batch_size)
         conf_mat = ConfusionMatrix(self.num_classes, streaming=True)
         data_generator = eval_dataset.generator(
             batch_size=batch_size, drop_last=False)
         if not hasattr(self, 'parallel_test_prog'):
-            self.parallel_test_prog = fluid.CompiledProgram(
-                self.test_prog).with_data_parallel(
-                    share_vars_from=self.parallel_train_prog)
-        logging.info(
-            "Start to evaluating(total_samples={}, total_steps={})...".format(
-                eval_dataset.num_samples, total_steps))
+            with fluid.scope_guard(self.scope):
+                self.parallel_test_prog = fluid.CompiledProgram(
+                    self.test_prog).with_data_parallel(
+                        share_vars_from=self.parallel_train_prog)
+        logging.info("Start to evaluating(total_samples={}, total_steps={})...".
+                     format(eval_dataset.num_samples, total_steps))
         for step, data in tqdm.tqdm(
                 enumerate(data_generator()), total=total_steps):
             images = np.array([d[0] for d in data])
@@ -350,10 +349,12 @@ class DeepLabv3p(BaseAPI):
                 pad_images = np.tile(images[0:1], (num_pad_samples, 1, 1, 1))
                 images = np.concatenate([images, pad_images])
             feed_data = {'image': images}
-            outputs = self.exe.run(self.parallel_test_prog,
-                                   feed=feed_data,
-                                   fetch_list=list(self.test_outputs.values()),
-                                   return_numpy=True)
+            with fluid.scope_guard(self.scope):
+                outputs = self.exe.run(
+                    self.parallel_test_prog,
+                    feed=feed_data,
+                    fetch_list=list(self.test_outputs.values()),
+                    return_numpy=True)
             pred = outputs[0]
             if num_samples < batch_size:
                 pred = pred[0:num_samples]
@@ -399,10 +400,11 @@ class DeepLabv3p(BaseAPI):
                 transforms=self.test_transforms, mode='test')
             im, im_info = self.test_transforms(im_file)
         im = np.expand_dims(im, axis=0)
-        result = self.exe.run(self.test_prog,
-                              feed={'image': im},
-                              fetch_list=list(self.test_outputs.values()),
-                              use_program_cache=True)
+        with fluid.scope_guard(self.scope):
+            result = self.exe.run(self.test_prog,
+                                  feed={'image': im},
+                                  fetch_list=list(self.test_outputs.values()),
+                                  use_program_cache=True)
         pred = result[0]
         pred = np.squeeze(pred).astype('uint8')
         logit = result[1]
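The evaluation loops in these hunks pad an under-sized final batch by tiling the first image, run the padded batch, and then keep only the first `num_samples` predictions. A small NumPy sketch of that pattern (function name hypothetical):

```python
import numpy as np

def pad_batch(images, batch_size):
    """Pad an under-sized last batch by tiling the first sample, as the
    evaluation loops above do, so every device receives a full slice; the
    caller then keeps only the first num_samples predictions."""
    num_samples = images.shape[0]
    if num_samples < batch_size:
        pad = np.tile(images[0:1], (batch_size - num_samples, 1, 1, 1))
        images = np.concatenate([images, pad])
    return images, num_samples

# A last batch of 3 NCHW images padded up to a batch size of 4.
batch, n = pad_batch(np.zeros((3, 3, 32, 32)), batch_size=4)
print(batch.shape, n)  # -> (4, 3, 32, 32) 3
```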

+ 19 - 16
paddlex/cv/models/faster_rcnn.py

@@ -1,11 +1,11 @@
 # copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
-# 
+#
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
-# 
+#
 #     http://www.apache.org/licenses/LICENSE-2.0
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -325,10 +325,12 @@ class FasterRCNN(BaseAPI):
                 'im_info': im_infos,
                 'im_shape': im_shapes,
             }
-            outputs = self.exe.run(self.test_prog,
-                                   feed=[feed_data],
-                                   fetch_list=list(self.test_outputs.values()),
-                                   return_numpy=False)
+            with fluid.scope_guard(self.scope):
+                outputs = self.exe.run(
+                    self.test_prog,
+                    feed=[feed_data],
+                    fetch_list=list(self.test_outputs.values()),
+                    return_numpy=False)
             res = {
                 'bbox': (np.array(outputs[0]),
                          outputs[0].recursive_sequence_lengths())
@@ -388,15 +390,16 @@ class FasterRCNN(BaseAPI):
         im = np.expand_dims(im, axis=0)
         im_resize_info = np.expand_dims(im_resize_info, axis=0)
         im_shape = np.expand_dims(im_shape, axis=0)
-        outputs = self.exe.run(self.test_prog,
-                               feed={
-                                   'image': im,
-                                   'im_info': im_resize_info,
-                                   'im_shape': im_shape
-                               },
-                               fetch_list=list(self.test_outputs.values()),
-                               return_numpy=False,
-                               use_program_cache=True)
+        with fluid.scope_guard(self.scope):
+            outputs = self.exe.run(self.test_prog,
+                                   feed={
+                                       'image': im,
+                                       'im_info': im_resize_info,
+                                       'im_shape': im_shape
+                                   },
+                                   fetch_list=list(self.test_outputs.values()),
+                                   return_numpy=False,
+                                   use_program_cache=True)
         res = {
             k: (np.array(v), v.recursive_sequence_lengths())
             for k, v in zip(list(self.test_outputs.keys()), outputs)

+ 36 - 32
paddlex/cv/models/load_model.py

@@ -24,6 +24,7 @@ import paddlex.utils.logging as logging
 
 
 def load_model(model_dir, fixed_input_shape=None):
+    model_scope = fluid.Scope()
     if not osp.exists(osp.join(model_dir, "model.yml")):
         raise Exception("There's not model.yml in {}".format(model_dir))
     with open(osp.join(model_dir, "model.yml")) as f:
@@ -51,38 +52,40 @@ def load_model(model_dir, fixed_input_shape=None):
                              format(fixed_input_shape))
                 model.fixed_input_shape = fixed_input_shape
 
-    if status == "Normal" or \
-            status == "Prune" or status == "fluid.save":
-        startup_prog = fluid.Program()
-        model.test_prog = fluid.Program()
-        with fluid.program_guard(model.test_prog, startup_prog):
-            with fluid.unique_name.guard():
-                model.test_inputs, model.test_outputs = model.build_net(
-                    mode='test')
-        model.test_prog = model.test_prog.clone(for_test=True)
-        model.exe.run(startup_prog)
-        if status == "Prune":
-            from .slim.prune import update_program
-            model.test_prog = update_program(model.test_prog, model_dir,
-                                             model.places[0])
-        import pickle
-        with open(osp.join(model_dir, 'model.pdparams'), 'rb') as f:
-            load_dict = pickle.load(f)
-        fluid.io.set_program_state(model.test_prog, load_dict)
-
-    elif status == "Infer" or \
-            status == "Quant" or status == "fluid.save_inference_model":
-        [prog, input_names, outputs] = fluid.io.load_inference_model(
-            model_dir, model.exe, params_filename='__params__')
-        model.test_prog = prog
-        test_outputs_info = info['_ModelInputsOutputs']['test_outputs']
-        model.test_inputs = OrderedDict()
-        model.test_outputs = OrderedDict()
-        for name in input_names:
-            model.test_inputs[name] = model.test_prog.global_block().var(name)
-        for i, out in enumerate(outputs):
-            var_desc = test_outputs_info[i]
-            model.test_outputs[var_desc[0]] = out
+    with fluid.scope_guard(model_scope):
+        if status == "Normal" or \
+                status == "Prune" or status == "fluid.save":
+            startup_prog = fluid.Program()
+            model.test_prog = fluid.Program()
+            with fluid.program_guard(model.test_prog, startup_prog):
+                with fluid.unique_name.guard():
+                    model.test_inputs, model.test_outputs = model.build_net(
+                        mode='test')
+            model.test_prog = model.test_prog.clone(for_test=True)
+            model.exe.run(startup_prog)
+            if status == "Prune":
+                from .slim.prune import update_program
+                model.test_prog = update_program(model.test_prog, model_dir,
+                                                 model.places[0])
+            import pickle
+            with open(osp.join(model_dir, 'model.pdparams'), 'rb') as f:
+                load_dict = pickle.load(f)
+            fluid.io.set_program_state(model.test_prog, load_dict)
+
+        elif status == "Infer" or \
+                status == "Quant" or status == "fluid.save_inference_model":
+            [prog, input_names, outputs] = fluid.io.load_inference_model(
+                model_dir, model.exe, params_filename='__params__')
+            model.test_prog = prog
+            test_outputs_info = info['_ModelInputsOutputs']['test_outputs']
+            model.test_inputs = OrderedDict()
+            model.test_outputs = OrderedDict()
+            for name in input_names:
+                model.test_inputs[name] = model.test_prog.global_block().var(
+                    name)
+            for i, out in enumerate(outputs):
+                var_desc = test_outputs_info[i]
+                model.test_outputs[var_desc[0]] = out
     if 'Transforms' in info:
         transforms_mode = info.get('TransformsMode', 'RGB')
         # Fix the model's input shape
@@ -107,6 +110,7 @@ def load_model(model_dir, fixed_input_shape=None):
                 model.__dict__[k] = v
 
     logging.info("Model[{}] loaded.".format(info['Model']))
+    model.scope = model_scope
     model.trainable = False
     model.status = status
     return model

+ 19 - 16
paddlex/cv/models/mask_rcnn.py

@@ -1,11 +1,11 @@
 # copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
-# 
+#
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
-# 
+#
 #     http://www.apache.org/licenses/LICENSE-2.0
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -286,10 +286,12 @@ class MaskRCNN(FasterRCNN):
                 'im_info': im_infos,
                 'im_shape': im_shapes,
             }
-            outputs = self.exe.run(self.test_prog,
-                                   feed=[feed_data],
-                                   fetch_list=list(self.test_outputs.values()),
-                                   return_numpy=False)
+            with fluid.scope_guard(self.scope):
+                outputs = self.exe.run(
+                    self.test_prog,
+                    feed=[feed_data],
+                    fetch_list=list(self.test_outputs.values()),
+                    return_numpy=False)
             res = {
                 'bbox': (np.array(outputs[0]),
                          outputs[0].recursive_sequence_lengths()),
@@ -356,15 +358,16 @@ class MaskRCNN(FasterRCNN):
         im = np.expand_dims(im, axis=0)
         im_resize_info = np.expand_dims(im_resize_info, axis=0)
         im_shape = np.expand_dims(im_shape, axis=0)
-        outputs = self.exe.run(self.test_prog,
-                               feed={
-                                   'image': im,
-                                   'im_info': im_resize_info,
-                                   'im_shape': im_shape
-                               },
-                               fetch_list=list(self.test_outputs.values()),
-                               return_numpy=False,
-                               use_program_cache=True)
+        with fluid.scope_guard(self.scope):
+            outputs = self.exe.run(self.test_prog,
+                                   feed={
+                                       'image': im,
+                                       'im_info': im_resize_info,
+                                       'im_shape': im_shape
+                                   },
+                                   fetch_list=list(self.test_outputs.values()),
+                                   return_numpy=False,
+                                   use_program_cache=True)
         res = {
             k: (np.array(v), v.recursive_sequence_lengths())
             for k, v in zip(list(self.test_outputs.keys()), outputs)

+ 40 - 35
paddlex/cv/models/slim/post_quantization.py

@@ -85,13 +85,13 @@ class PaddleXPostTrainingQuantization(PostTrainingQuantization):
         self._support_quantize_op_type = \
             list(set(QuantizationTransformPass._supported_quantizable_op_type +
                 AddQuantDequantPass._supported_quantizable_op_type))
-        
+
         # Check inputs
         assert executor is not None, "The executor cannot be None."
         assert batch_size > 0, "The batch_size should be greater than 0."
         assert algo in self._support_algo_type, \
             "The algo should be KL, abs_max or min_max."
-        
+
         self._executor = executor
         self._dataset = dataset
         self._batch_size = batch_size
@@ -154,20 +154,19 @@ class PaddleXPostTrainingQuantization(PostTrainingQuantization):
         logging.info("Start to run batch!")
         for data in self._data_loader():
             start = time.time()
-            self._executor.run(
-                program=self._program,
-                feed=data,
-                fetch_list=self._fetch_list,
-                return_numpy=False)
+            with fluid.scope_guard(self._scope):
+                self._executor.run(program=self._program,
+                                   feed=data,
+                                   fetch_list=self._fetch_list,
+                                   return_numpy=False)
             if self._algo == "KL":
                 self._sample_data(batch_id)
             else:
                 self._sample_threshold()
             end = time.time()
-            logging.debug('[Run batch data] Batch={}/{}, time_each_batch={} s.'.format(
-                str(batch_id + 1),
-                str(batch_ct),
-                str(end-start)))
+            logging.debug(
+                '[Run batch data] Batch={}/{}, time_each_batch={} s.'.format(
+                    str(batch_id + 1), str(batch_ct), str(end - start)))
             batch_id += 1
             if self._batch_nums and batch_id >= self._batch_nums:
                 break
@@ -194,15 +193,16 @@ class PaddleXPostTrainingQuantization(PostTrainingQuantization):
         Returns:
             None
         '''
-        feed_vars_names = [var.name for var in self._feed_list]
-        fluid.io.save_inference_model(
-            dirname=save_model_path,
-            feeded_var_names=feed_vars_names,
-            target_vars=self._fetch_list,
-            executor=self._executor,
-            params_filename='__params__',
-            main_program=self._program)
-        
+        with fluid.scope_guard(self._scope):
+            feed_vars_names = [var.name for var in self._feed_list]
+            fluid.io.save_inference_model(
+                dirname=save_model_path,
+                feeded_var_names=feed_vars_names,
+                target_vars=self._fetch_list,
+                executor=self._executor,
+                params_filename='__params__',
+                main_program=self._program)
+
     def _load_model_data(self):
         '''
         Set data loader.
@@ -212,7 +212,8 @@ class PaddleXPostTrainingQuantization(PostTrainingQuantization):
         self._data_loader = fluid.io.DataLoader.from_generator(
             feed_list=feed_vars, capacity=3 * self._batch_size, iterable=True)
         self._data_loader.set_sample_list_generator(
-            self._dataset.generator(self._batch_size, drop_last=True),
+            self._dataset.generator(
+                self._batch_size, drop_last=True),
             places=self._place)
 
     def _calculate_kl_threshold(self):
@@ -235,10 +236,12 @@ class PaddleXPostTrainingQuantization(PostTrainingQuantization):
                     weight_threshold.append(abs_max_value)
             self._quantized_var_kl_threshold[var_name] = weight_threshold
             end = time.time()
-            logging.debug('[Calculate weight] Weight_id={}/{}, time_each_weight={} s.'.format(
-                str(ct),
-                str(len(self._quantized_weight_var_name)),
-                str(end-start)))
+            logging.debug(
+                '[Calculate weight] Weight_id={}/{}, time_each_weight={} s.'.
+                format(
+                    str(ct),
+                    str(len(self._quantized_weight_var_name)), str(end -
+                                                                   start)))
             ct += 1
 
         ct = 1
@@ -257,10 +260,12 @@ class PaddleXPostTrainingQuantization(PostTrainingQuantization):
                 self._quantized_var_kl_threshold[var_name] = \
                     self._get_kl_scaling_factor(np.abs(sampling_data))
                 end = time.time()
-                logging.debug('[Calculate activation] Activation_id={}/{}, time_each_activation={} s.'.format(
-                    str(ct),
-                    str(len(self._quantized_act_var_name)),
-                    str(end-start)))
+                logging.debug(
+                    '[Calculate activation] Activation_id={}/{}, time_each_activation={} s.'.
+                    format(
+                        str(ct),
+                        str(len(self._quantized_act_var_name)),
+                        str(end - start)))
                 ct += 1
         else:
             for var_name in self._quantized_act_var_name:
@@ -270,10 +275,10 @@ class PaddleXPostTrainingQuantization(PostTrainingQuantization):
                 self._quantized_var_kl_threshold[var_name] = \
                     self._get_kl_scaling_factor(np.abs(self._sampling_data[var_name]))
                 end = time.time()
-                logging.debug('[Calculate activation] Activation_id={}/{}, time_each_activation={} s.'.format(
-                    str(ct),
-                    str(len(self._quantized_act_var_name)),
-                    str(end-start)))
+                logging.debug(
+                    '[Calculate activation] Activation_id={}/{}, time_each_activation={} s.'.
+                    format(
+                        str(ct),
+                        str(len(self._quantized_act_var_name)),
+                        str(end - start)))
                 ct += 1
-
-                

+ 16 - 13
paddlex/cv/models/yolo_v3.py

@@ -1,11 +1,11 @@
 # copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
-# 
+#
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
-# 
+#
 #     http://www.apache.org/licenses/LICENSE-2.0
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -313,10 +313,12 @@ class YOLOv3(BaseAPI):
             images = np.array([d[0] for d in data])
             im_sizes = np.array([d[1] for d in data])
             feed_data = {'image': images, 'im_size': im_sizes}
-            outputs = self.exe.run(self.test_prog,
-                                   feed=[feed_data],
-                                   fetch_list=list(self.test_outputs.values()),
-                                   return_numpy=False)
+            with fluid.scope_guard(self.scope):
+                outputs = self.exe.run(
+                    self.test_prog,
+                    feed=[feed_data],
+                    fetch_list=list(self.test_outputs.values()),
+                    return_numpy=False)
             res = {
                 'bbox': (np.array(outputs[0]),
                          outputs[0].recursive_sequence_lengths())
@@ -366,12 +368,13 @@ class YOLOv3(BaseAPI):
             im, im_size = self.test_transforms(img_file)
         im = np.expand_dims(im, axis=0)
         im_size = np.expand_dims(im_size, axis=0)
-        outputs = self.exe.run(self.test_prog,
-                               feed={'image': im,
-                                     'im_size': im_size},
-                               fetch_list=list(self.test_outputs.values()),
-                               return_numpy=False,
-                               use_program_cache=True)
+        with fluid.scope_guard(self.scope):
+            outputs = self.exe.run(self.test_prog,
+                                   feed={'image': im,
+                                         'im_size': im_size},
+                                   fetch_list=list(self.test_outputs.values()),
+                                   return_numpy=False,
+                                   use_program_cache=True)
         res = {
             k: (np.array(v), v.recursive_sequence_lengths())
             for k, v in zip(list(self.test_outputs.keys()), outputs)

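Both predict paths above expand a single preprocessed image to a batch of one before feeding the executor, since the test program expects batched inputs. A minimal NumPy sketch of that step (the shapes are illustrative only, not fixed by PaddleX):

```python
import numpy as np

# A single CHW image and its size, as a test_transforms pipeline might
# produce them (608x608 is just an example resolution).
im = np.zeros((3, 608, 608), dtype=np.float32)
im_size = np.array([608, 608], dtype=np.int32)

# predict() adds a leading batch axis so the executor sees a batch of one:
im = np.expand_dims(im, axis=0)        # shape becomes (1, 3, 608, 608)
im_size = np.expand_dims(im_size, axis=0)  # shape becomes (1, 2)
```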
+ 5 - 5
paddlex/cv/transforms/visualize.py

@@ -73,7 +73,7 @@ def cls_compose(im, label=None, transforms=None, vdl_writer=None, step=0):
                 raise TypeError('Can\'t read The image file {}!'.format(im))
         im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
         if vdl_writer is not None:
-            vdl_writer.add_image(tag='0. OriginalImange/' +  str(step),
+            vdl_writer.add_image(tag='0. OriginalImage/' +  str(step),
                                  img=im,
                                  step=0)
         op_id = 1
@@ -148,7 +148,7 @@ def det_compose(im, im_info=None, label_info=None, transforms=None, vdl_writer=N
         if len(outputs) == 3:
             label_info = outputs[2]
         if vdl_writer is not None:
-            vdl_writer.add_image(tag='0. OriginalImange/' +  str(step),
+            vdl_writer.add_image(tag='0. OriginalImage/' +  str(step),
                                  img=im,
                                  step=0)
         op_id = 1
@@ -209,7 +209,7 @@ def det_compose(im, im_info=None, label_info=None, transforms=None, vdl_writer=N
             if vdl_writer is not None:
                 tag = str(op_id) + '. ' + op.__class__.__name__ + '/' +  str(step)
                 if op is None:
-                    tag = str(op_id) + '. OriginalImangeWithGTBox/' +  str(step)
+                    tag = str(op_id) + '. OriginalImageWithGTBox/' +  str(step)
                 vdl_writer.add_image(tag=tag,
                                      img=vdl_im,
                                      step=0)
@@ -233,7 +233,7 @@ def seg_compose(im, im_info=None, label=None, transforms=None, vdl_writer=None,
         if not isinstance(label, np.ndarray):
             label = np.asarray(Image.open(label))
     if vdl_writer is not None:
-        vdl_writer.add_image(tag='0. OriginalImange' + '/' +  str(step),
+        vdl_writer.add_image(tag='0. OriginalImage' + '/' +  str(step),
                              img=im,
                              step=0)
     op_id = 1
@@ -303,4 +303,4 @@ def visualize(dataset, img_count=3, save_dir='vdl_output'):
             seg_compose(*data)
         else:
             raise Exception('The transform must the subclass of \
-                    ClsTransform or DetTransform or SegTransform!')
+                    ClsTransform or DetTransform or SegTransform!')

+ 1 - 1
paddlex/tools/x2coco.py

@@ -100,7 +100,7 @@ class LabelMe2COCO(X2COCO):
         image["height"] = json_info["imageHeight"]
         image["width"] = json_info["imageWidth"]
         image["id"] = image_id + 1
-        image["file_name"] = json_info["imagePath"].split("/")[-1]
+        image["file_name"] = osp.split(json_info["imagePath"])[-1]
         return image
     
     def generate_polygon_anns_field(self, height, width,
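The x2coco.py change replaces a hard-coded `split("/")` with `osp.split`, which splits on the running platform's path separator. A LabelMe `imagePath` written on Windows uses backslashes, so the naive split leaves the directory prefix in the file name. A small sketch using the explicit `ntpath`/`posixpath` modules to show both behaviors (the paths are made-up examples):

```python
import ntpath
import posixpath

win_path = "images\\img1.jpg"    # path as saved by a Windows annotator
posix_path = "images/img1.jpg"   # path as saved on Linux/macOS

# Old approach: only handles "/", so the backslash path is not split at all.
naive = win_path.split("/")[-1]

# Path-module split handles its platform's separator correctly.
from_win = ntpath.split(win_path)[-1]
from_posix = posixpath.split(posix_path)[-1]
```

`osp.split` (i.e. `os.path.split`) behaves like whichever of these matches the OS the converter runs on, which is why the patched line is more robust than splitting on a literal `/`.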