简体中文 | [English](human_keypoint_detection.en.md)

# Human Keypoint Detection Pipeline Tutorial

## 1. Introduction to the Human Keypoint Detection Pipeline

Human keypoint detection analyzes human posture and movement by identifying and localizing specific joints and body parts. The task requires not only detecting the persons in an image but also precisely locating their keypoints, such as shoulders, elbows, and knees, enabling pose estimation and action recognition. Human keypoint detection is widely used in sports analysis, health monitoring, animation production, and human-computer interaction.

The human keypoint detection pipeline in PaddleX is a Top-Down solution consisting of a pedestrian detection module and a keypoint detection module. It is optimized for mobile devices and can perform multi-person pose estimation accurately and smoothly on them.

<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/pipelines/human_keypoint_detection/01.jpg">

<b>The human keypoint detection pipeline includes a pedestrian detection module and a keypoint detection module</b>, each offering several models. Use the benchmark data below to choose a model. <b>If you prioritize accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a faster model; if you prioritize storage size, choose a smaller model.</b>

<details><summary> 👉Model List Details</summary>

<b>Pedestrian Detection Module:</b>

<table>
  <tr>
    <th>Model</th>
    <th>mAP(0.5:0.95)</th>
    <th>mAP(0.5)</th>
    <th>GPU Inference Time (ms)</th>
    <th>CPU Inference Time (ms)</th>
    <th>Model Size (M)</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>PP-YOLOE-L_human</td>
    <td>48.0</td>
    <td>81.9</td>
    <td>32.8</td>
    <td>777.7</td>
    <td>196.02</td>
    <td rowspan="2">Pedestrian detection models based on PP-YOLOE</td>
  </tr>
  <tr>
    <td>PP-YOLOE-S_human</td>
    <td>42.5</td>
    <td>77.9</td>
    <td>15.0</td>
    <td>179.3</td>
    <td>28.79</td>
  </tr>
</table>

<b>Note: The accuracy metrics above are mAP(0.5:0.95) on the CrowdHuman dataset. All GPU inference times are measured on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speeds are measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.</b>

<b>Keypoint Detection Module:</b>

<table>
  <tr>
    <th>Model</th>
    <th>Scheme</th>
    <th>Input Size</th>
    <th>AP(0.5:0.95)</th>
    <th>GPU Inference Time (ms)</th>
    <th>CPU Inference Time (ms)</th>
    <th>Model Size (M)</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>PP-TinyPose_128x96</td>
    <td>Top-Down</td>
    <td>128*96</td>
    <td>58.4</td>
    <td></td>
    <td></td>
    <td>4.9</td>
    <td rowspan="2">PP-TinyPose is a real-time keypoint detection model developed by Baidu PaddlePaddle's vision team and optimized for mobile devices; it can smoothly perform multi-person pose estimation on mobile devices</td>
  </tr>
  <tr>
    <td>PP-TinyPose_256x192</td>
    <td>Top-Down</td>
    <td>256*192</td>
    <td>68.3</td>
    <td></td>
    <td></td>
    <td>4.9</td>
  </tr>
</table>

<b>Note: The accuracy metrics above are AP(0.5:0.95) on the COCO dataset, with detection boxes taken from ground-truth annotations. All GPU inference times are measured on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speeds are measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.</b>

</details>

## 2. Quick Start

All pretrained model pipelines provided by PaddleX can be tried out immediately. You can experience the human keypoint detection pipeline locally using Python.

### 2.1 Online Experience

Online experience is not yet supported.

### 2.2 Local Experience

> ❗ Before using the human keypoint detection pipeline locally, make sure you have installed the PaddleX wheel package by following the [PaddleX Installation Tutorial](../../../installation/installation.md).

#### 2.2.1 Command Line Experience

A single command lets you quickly experience the human keypoint detection pipeline. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/keypoint_detection_001.jpg) and replace `--input` with the local path to run prediction:

```bash
paddlex --pipeline human_keypoint_detection --input keypoint_detection_001.jpg --device gpu:0
```
Parameter description:

```
--pipeline: pipeline name, here the human keypoint detection pipeline
--input: local path or URL of the input image to be processed
--device: the GPU index to use (e.g. gpu:0 for the first GPU, gpu:1,2 for the second and third GPUs); the CPU can also be used (--device cpu)
```

Running the command above loads the default human keypoint detection pipeline configuration file. If you need a custom configuration file, you can obtain one as follows:

<details><summary> 👉Click to expand</summary>

<pre><code class="language-bash">paddlex --get_pipeline_config human_keypoint_detection
</code></pre>
<p>After execution, the human keypoint detection pipeline configuration file is saved in the current directory. To customize the save location, run the following command (assuming the custom save location is <code>./my_path</code>):</p>
<pre><code class="language-bash">paddlex --get_pipeline_config human_keypoint_detection --save_path ./my_path
</code></pre></details>

#### 2.2.2 Python Script Integration

A few lines of code are enough to run fast inference with the human keypoint detection pipeline:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="human_keypoint_detection")

output = pipeline.predict("keypoint_detection_001.jpg")
for res in output:
    res.print()
    res.save_to_img("./output/")
```

The Python script above performs the following steps:

(1) Instantiate the pipeline object with `create_pipeline`. The parameters are described below:

|Parameter|Description|Type|Default|
|-|-|-|-|
|`pipeline`|Pipeline name or path to a pipeline configuration file. If a name, it must be a pipeline supported by PaddleX.|`str`|None|
|`device`|Device for pipeline model inference. Supports "gpu" and "cpu".|`str`|`gpu`|
|`enable_hpi`|Whether to enable high-performance inference; only available if the pipeline supports it.|`bool`|`False`|

(2) Call the pipeline object's `predict` method to run inference. The `predict` method takes a parameter `x` for the input data and supports several input types:

| Parameter Type | Description |
|---------------|-----------------------------------------------------------------------------------------------------------|
| Python Var | A Python variable, such as image data represented as `numpy.ndarray`. |
| str | The local path of the file to predict, e.g. `/root/data/img.jpg`. |
| str | The URL of the file to predict, e.g. this [example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png). |
| str | A local directory containing the files to predict, e.g. `/root/data/`. |
| dict | A dictionary whose key matches the task (e.g. "img" for image classification) and whose value is any of the above, e.g. `{"img": "/root/data1"}`. |
| list | A list whose elements are any of the above, e.g. `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]`, `[{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]`. |
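As a quick sketch of these input forms (all paths below are hypothetical placeholders, and `pipeline.predict` is not actually called here):

```python
# Hypothetical examples of values that could be passed to predict();
# none of these paths are read in this sketch.
path_input = "/root/data/img.jpg"            # a single image file
dir_input = "/root/data/"                    # a directory of images
dict_input = {"img": "/root/data1"}          # key matches the task
list_input = ["/root/data/img1.jpg", "/root/data/img2.jpg"]

# Each form would be used as: output = pipeline.predict(x)
for x in (path_input, dir_input, dict_input, list_input):
    print(type(x).__name__)  # → str, str, dict, list
```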

(3) Obtain the prediction results by calling `predict`: the `predict` method is a `generator`, so results must be obtained by iterating over it. `predict` processes data batch by batch, so each prediction result is a list representing one batch of results.

(4) Process the prediction results: the result for each sample is of type `dict` and supports printing or saving to a file. The supported save formats depend on the pipeline, for example:

| Method | Description | Parameters |
|--------------|-----------------------------|--------------------------------------------------------------------------------------------------------|
| print | Print results to the terminal | `- format_json`: bool, whether to pretty-print the output as indented JSON, default True;<br>`- indent`: int, JSON indentation, only valid when format_json is True, default 4;<br>`- ensure_ascii`: bool, JSON formatting option, only valid when format_json is True, default False |
| save_to_json | Save results as a JSON file | `- save_path`: str, save path; when it is a directory, the saved file is named after the input file;<br>`- indent`: int, JSON indentation, default 4;<br>`- ensure_ascii`: bool, JSON formatting option, default False |
| save_to_img | Save results as an image file | `- save_path`: str, save path; when it is a directory, the saved file is named after the input file |
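The `indent` and `ensure_ascii` options above follow the semantics of Python's standard `json` module; a minimal sketch with a made-up result dict (not a real pipeline output) illustrates their effect:

```python
import json

# Hypothetical stand-in for one prediction result.
result = {"label": "行人", "score": 0.9}

# ensure_ascii=False keeps non-ASCII characters readable in the output;
# indent=4 pretty-prints with 4-space indentation (the defaults above).
pretty = json.dumps(result, indent=4, ensure_ascii=False)
escaped = json.dumps(result, ensure_ascii=True)
print(pretty)
```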

If you have obtained the configuration file, you can customize the settings of the human keypoint detection pipeline: simply set the `pipeline` parameter of `create_pipeline` to the path of the pipeline configuration file.

For example, if your configuration file is saved at `./my_path/human_keypoint_detection.yaml`, you only need to run:

```python
from paddlex import create_pipeline
pipeline = create_pipeline(pipeline="./my_path/human_keypoint_detection.yaml")
output = pipeline.predict("keypoint_detection_001.jpg")
for res in output:
    res.print() ## Print the structured prediction output
    res.save_to_img("./output/") ## Save the visualized result image
    res.save_to_json("./output/") ## Save the structured prediction output
```

## 3. Development Integration/Deployment

If the human keypoint detection pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.

If you need to use the human keypoint detection pipeline directly in your Python project, refer to the sample code in [2.2.2 Python Script Integration](#222-python-script-integration).

In addition, PaddleX provides three other deployment methods, described in detail below:

🚀 <b>High-Performance Inference</b>: In production environments, many applications have strict performance requirements (especially response latency) to ensure efficient operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed workflow, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.md).

☁️ <b>Service Deployment</b>: Service deployment is a common form of deployment in production. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. PaddleX lets you deploy pipelines as services at low cost. For the detailed workflow, refer to the [PaddleX Service Deployment Guide](../../../pipeline_deploy/service_deploy.md).

Below are the API reference and multi-language service invocation examples:

<details><summary>API Reference</summary>

<p>For all operations provided by the service:</p>
<ul>
<li>Both the response body and the request body of POST requests are JSON data (JSON objects).</li>
<li>When the request is processed successfully, the response status code is <code>200</code>, and the attributes of the response body are as follows:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>errorCode</code></td>
<td><code>integer</code></td>
<td>Error code. Fixed at <code>0</code>.</td>
</tr>
<tr>
<td><code>errorMsg</code></td>
<td><code>string</code></td>
<td>Error description. Fixed at <code>"Success"</code>.</td>
</tr>
</tbody>
</table>
<p>The response body may also have a <code>result</code> attribute of type <code>object</code>, which stores the operation result information.</p>
<ul>
<li>When the request is not processed successfully, the attributes of the response body are as follows:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>errorCode</code></td>
<td><code>integer</code></td>
<td>Error code. Same as the response status code.</td>
</tr>
<tr>
<td><code>errorMsg</code></td>
<td><code>string</code></td>
<td>Error description.</td>
</tr>
</tbody>
</table>
<p>The operations provided by the service are as follows:</p>
<ul>
<li><b><code>infer</code></b></li>
</ul>
<p>Get the OCR result of an image.</p>
<p><code>POST /ocr</code></p>
<ul>
<li>The attributes of the request body are as follows:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Meaning</th>
<th>Required</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>image</code></td>
<td><code>string</code></td>
<td>The URL of an image file accessible to the service, or the Base64-encoded content of an image file.</td>
<td>Yes</td>
</tr>
<tr>
<td><code>inferenceParams</code></td>
<td><code>object</code></td>
<td>Inference parameters.</td>
<td>No</td>
</tr>
</tbody>
</table>
<p>The attributes of <code>inferenceParams</code> are as follows:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Meaning</th>
<th>Required</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>maxLongSide</code></td>
<td><code>integer</code></td>
<td>During inference, if the longer side of the input image to the text detection model is greater than <code>maxLongSide</code>, the image is scaled so that its longer side equals <code>maxLongSide</code>.</td>
<td>No</td>
</tr>
</tbody>
</table>
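Putting the request schema above together, here is a minimal client-side sketch (standard library only; the image bytes and the `maxLongSide` value are placeholder assumptions) that builds the JSON request body with the required `image` field and the optional `inferenceParams`:

```python
import base64
import json

# Placeholder bytes standing in for a real image file's content.
image_bytes = b"placeholder-image-bytes"
image_data = base64.b64encode(image_bytes).decode("ascii")

# Request body per the tables above: "image" is required,
# "inferenceParams" is optional.
payload = {
    "image": image_data,
    "inferenceParams": {"maxLongSide": 960},  # hypothetical value
}
body = json.dumps(payload)
print(body)
```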
<ul>
<li>When the request is processed successfully, the <code>result</code> of the response body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>texts</code></td>
<td><code>array</code></td>
<td>Text positions, contents, and scores.</td>
</tr>
<tr>
<td><code>image</code></td>
<td><code>string</code></td>
<td>The OCR result image with the detected text positions annotated. JPEG format, Base64-encoded.</td>
</tr>
</tbody>
</table>
<p>Each element in <code>texts</code> is an <code>object</code> with the following attributes:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>poly</code></td>
<td><code>array</code></td>
<td>Text position. The elements of the array are, in order, the vertex coordinates of the polygon enclosing the text.</td>
</tr>
<tr>
<td><code>text</code></td>
<td><code>string</code></td>
<td>Text content.</td>
</tr>
<tr>
<td><code>score</code></td>
<td><code>number</code></td>
<td>Text recognition score.</td>
</tr>
</tbody>
</table>
<p>An example of <code>result</code> is as follows:</p>
<pre><code class="language-json">{
"texts": [
{
"poly": [
[444, 244],
[705, 244],
[705, 311],
[444, 311]
],
"text": "北京南站",
"score": 0.9
},
{
"poly": [
[992, 248],
[1263, 251],
[1263, 318],
[992, 315]
],
"text": "天津站",
"score": 0.5
}
],
"image": "xxxxxx"
}
</code></pre></details>

<details><summary>Multi-Language Service Invocation Examples</summary>

<details>
<summary>Python</summary>

<pre><code class="language-python">import base64
import requests

API_URL = "http://localhost:8080/ocr" # Service URL
image_path = "./demo.jpg"
output_image_path = "./out.jpg"

# Base64-encode the local image
with open(image_path, "rb") as file:
    image_bytes = file.read()
    image_data = base64.b64encode(image_bytes).decode("ascii")

payload = {"image": image_data} # Base64-encoded file content or image URL

# Call the API
response = requests.post(API_URL, json=payload)

# Process the returned data
assert response.status_code == 200
result = response.json()["result"]
with open(output_image_path, "wb") as file:
    file.write(base64.b64decode(result["image"]))
print(f"Output image saved at {output_image_path}")
print("\nDetected texts:")
print(result["texts"])
</code></pre></details>

<details><summary>C++</summary>

<pre><code class="language-cpp">#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include "cpp-httplib/httplib.h" // https://github.com/Huiyicc/cpp-httplib
#include "nlohmann/json.hpp" // https://github.com/nlohmann/json
#include "base64.hpp" // https://github.com/tobiaslocker/base64

int main() {
    httplib::Client client("localhost:8080");
    const std::string imagePath = "./demo.jpg";
    const std::string outputImagePath = "./out.jpg";

    httplib::Headers headers = {
        {"Content-Type", "application/json"}
    };

    // Base64-encode the local image
    std::ifstream file(imagePath, std::ios::binary | std::ios::ate);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> buffer(size);
    if (!file.read(buffer.data(), size)) {
        std::cerr << "Error reading file." << std::endl;
        return 1;
    }
    std::string bufferStr(reinterpret_cast<const char*>(buffer.data()), buffer.size());
    std::string encodedImage = base64::to_base64(bufferStr);

    nlohmann::json jsonObj;
    jsonObj["image"] = encodedImage;
    std::string body = jsonObj.dump();

    // Call the API
    auto response = client.Post("/ocr", headers, body, "application/json");
    // Process the returned data
    if (response && response->status == 200) {
        nlohmann::json jsonResponse = nlohmann::json::parse(response->body);
        auto result = jsonResponse["result"];

        encodedImage = result["image"];
        std::string decodedString = base64::from_base64(encodedImage);
        std::vector<unsigned char> decodedImage(decodedString.begin(), decodedString.end());
        std::ofstream outputImage(outputImagePath, std::ios::binary | std::ios::out);
        if (outputImage.is_open()) {
            outputImage.write(reinterpret_cast<char*>(decodedImage.data()), decodedImage.size());
            outputImage.close();
            std::cout << "Output image saved at " << outputImagePath << std::endl;
        } else {
            std::cerr << "Unable to open file for writing: " << outputImagePath << std::endl;
        }

        auto texts = result["texts"];
        std::cout << "\nDetected texts:" << std::endl;
        for (const auto& text : texts) {
            std::cout << text << std::endl;
        }
    } else {
        std::cout << "Failed to send HTTP request." << std::endl;
        return 1;
    }

    return 0;
}
</code></pre></details>

<details><summary>Java</summary>

<pre><code class="language-java">import okhttp3.*;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Base64;

public class Main {
    public static void main(String[] args) throws IOException {
        String API_URL = "http://localhost:8080/ocr"; // Service URL
        String imagePath = "./demo.jpg"; // Local image
        String outputImagePath = "./out.jpg"; // Output image

        // Base64-encode the local image
        File file = new File(imagePath);
        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
        String imageData = Base64.getEncoder().encodeToString(fileContent);

        ObjectMapper objectMapper = new ObjectMapper();
        ObjectNode params = objectMapper.createObjectNode();
        params.put("image", imageData); // Base64-encoded file content or image URL

        // Create an OkHttpClient instance
        OkHttpClient client = new OkHttpClient();
        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
        Request request = new Request.Builder()
                .url(API_URL)
                .post(body)
                .build();

        // Call the API and process the returned data
        try (Response response = client.newCall(request).execute()) {
            if (response.isSuccessful()) {
                String responseBody = response.body().string();
                JsonNode resultNode = objectMapper.readTree(responseBody);
                JsonNode result = resultNode.get("result");
                String base64Image = result.get("image").asText();
                JsonNode texts = result.get("texts");

                byte[] imageBytes = Base64.getDecoder().decode(base64Image);
                try (FileOutputStream fos = new FileOutputStream(outputImagePath)) {
                    fos.write(imageBytes);
                }
                System.out.println("Output image saved at " + outputImagePath);
                System.out.println("\nDetected texts: " + texts.toString());
            } else {
                System.err.println("Request failed with code: " + response.code());
            }
        }
    }
}
</code></pre></details>

<details><summary>Go</summary>

<pre><code class="language-go">package main

import (
    "bytes"
    "encoding/base64"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    API_URL := "http://localhost:8080/ocr"
    imagePath := "./demo.jpg"
    outputImagePath := "./out.jpg"

    // Base64-encode the local image
    imageBytes, err := ioutil.ReadFile(imagePath)
    if err != nil {
        fmt.Println("Error reading image file:", err)
        return
    }
    imageData := base64.StdEncoding.EncodeToString(imageBytes)

    payload := map[string]string{"image": imageData} // Base64-encoded file content or image URL
    payloadBytes, err := json.Marshal(payload)
    if err != nil {
        fmt.Println("Error marshaling payload:", err)
        return
    }

    // Call the API
    client := &http.Client{}
    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }

    res, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }
    defer res.Body.Close()

    // Process the returned data
    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        fmt.Println("Error reading response body:", err)
        return
    }
    type Response struct {
        Result struct {
            Image string `json:"image"`
            Texts []map[string]interface{} `json:"texts"`
        } `json:"result"`
    }
    var respData Response
    err = json.Unmarshal([]byte(string(body)), &respData)
    if err != nil {
        fmt.Println("Error unmarshaling response body:", err)
        return
    }

    outputImageData, err := base64.StdEncoding.DecodeString(respData.Result.Image)
    if err != nil {
        fmt.Println("Error decoding base64 image data:", err)
        return
    }
    err = ioutil.WriteFile(outputImagePath, outputImageData, 0644)
    if err != nil {
        fmt.Println("Error writing image to file:", err)
        return
    }
    fmt.Printf("Image saved at %s\n", outputImagePath)
    fmt.Println("\nDetected texts:")
    for _, text := range respData.Result.Texts {
        fmt.Println(text)
    }
}
</code></pre></details>

<details><summary>C#</summary>

<pre><code class="language-csharp">using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class Program
{
    static readonly string API_URL = "http://localhost:8080/ocr";
    static readonly string imagePath = "./demo.jpg";
    static readonly string outputImagePath = "./out.jpg";

    static async Task Main(string[] args)
    {
        var httpClient = new HttpClient();

        // Base64-encode the local image
        byte[] imageBytes = File.ReadAllBytes(imagePath);
        string image_data = Convert.ToBase64String(imageBytes);

        var payload = new JObject{ { "image", image_data } }; // Base64-encoded file content or image URL
        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");

        // Call the API
        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
        response.EnsureSuccessStatusCode();

        // Process the returned data
        string responseBody = await response.Content.ReadAsStringAsync();
        JObject jsonResponse = JObject.Parse(responseBody);

        string base64Image = jsonResponse["result"]["image"].ToString();
        byte[] outputImageBytes = Convert.FromBase64String(base64Image);

        File.WriteAllBytes(outputImagePath, outputImageBytes);
        Console.WriteLine($"Output image saved at {outputImagePath}");
        Console.WriteLine("\nDetected texts:");
        Console.WriteLine(jsonResponse["result"]["texts"].ToString());
    }
}
</code></pre></details>

<details><summary>Node.js</summary>

<pre><code class="language-js">const axios = require('axios');
const fs = require('fs');

const API_URL = 'http://localhost:8080/ocr'
const imagePath = './demo.jpg'
const outputImagePath = "./out.jpg";

let config = {
   method: 'POST',
   maxBodyLength: Infinity,
   url: API_URL,
   data: JSON.stringify({
    'image': encodeImageToBase64(imagePath)  // Base64-encoded file content or image URL
  })
};

// Base64-encode the local image
function encodeImageToBase64(filePath) {
  const bitmap = fs.readFileSync(filePath);
  return Buffer.from(bitmap).toString('base64');
}

// Call the API
axios.request(config)
.then((response) => {
    // Process the returned data
    const result = response.data["result"];
    const imageBuffer = Buffer.from(result["image"], 'base64');
    fs.writeFile(outputImagePath, imageBuffer, (err) => {
      if (err) throw err;
      console.log(`Output image saved at ${outputImagePath}`);
    });
    console.log("\nDetected texts:");
    console.log(result["texts"]);
})
.catch((error) => {
  console.log(error);
});
</code></pre></details>

<details><summary>PHP</summary>

<pre><code class="language-php"><?php

$API_URL = "http://localhost:8080/ocr"; // Service URL
$image_path = "./demo.jpg";
$output_image_path = "./out.jpg";

// Base64-encode the local image
$image_data = base64_encode(file_get_contents($image_path));
$payload = array("image" => $image_data); // Base64-encoded file content or image URL

// Call the API
$ch = curl_init($API_URL);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// Process the returned data
$result = json_decode($response, true)["result"];
file_put_contents($output_image_path, base64_decode($result["image"]));
echo "Output image saved at " . $output_image_path . "\n";
echo "\nDetected texts:\n";
print_r($result["texts"]);

?>
</code></pre></details>
</details>
<br/>

📱 <b>Edge Deployment</b>: Edge deployment places computation and data processing on the user's device itself, so the device can process data directly without relying on a remote server. PaddleX supports deploying models on edge devices such as Android. For the detailed workflow, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).

You can choose an appropriate method to deploy the model pipeline according to your needs and then proceed with subsequent AI application integration.

## 4. Secondary Development

If the default model weights provided by the human keypoint detection pipeline do not meet your accuracy or speed requirements in your scenario, you can try to further <b>fine-tune</b> the existing models using <b>your own data from your specific domain or application scenario</b> to improve the pipeline's recognition performance in your scenario.

### 4.1 Model Fine-Tuning

Since the human keypoint detection pipeline contains two modules (a pedestrian detection module and a keypoint detection module), unsatisfactory pipeline performance may come from either module.

Analyze the images with poor results. If you find that many pedestrian targets go undetected, the pedestrian detection model may be inadequate: refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/human_detection.md#四二次开发) section of the [Pedestrian Detection Module Tutorial](../../../module_usage/tutorials/cv_modules/human_detection.md) and fine-tune the pedestrian detection model with your private dataset. If keypoint errors occur on already-detected pedestrians, the keypoint detection model needs further improvement: refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/keypoint_detection.md#四二次开发) section of the [Keypoint Detection Module Tutorial](../../../module_usage/tutorials/cv_modules/keypoint_detection.md) and fine-tune the keypoint detection model.

### 4.2 Model Application

After you finish fine-tuning with your private dataset, you obtain local model weight files.

To use the fine-tuned model weights, simply modify the pipeline configuration file, replacing the corresponding entries with the local paths of the fine-tuned model weights:

```yaml
Pipeline:
  human_det_model: PP-YOLOE-S_human  # Can be replaced with the local path of the fine-tuned pedestrian detection model
  keypoint_det_model: PP-TinyPose_128x96  # Can be replaced with the local path of the fine-tuned keypoint detection model
  human_det_batch_size: 1
  keypoint_det_batch_size: 1
  device: gpu
```

Then load the modified pipeline configuration file using the command line or Python script method described in [2.2 Local Experience](#22-local-experience).

## 5. Multi-Hardware Support

PaddleX supports a variety of mainstream hardware devices such as NVIDIA GPU, Kunlunxin XPU, Ascend NPU, and Cambricon MLU. <b>Simply modify the `--device` parameter</b> to switch seamlessly between different hardware.

For example, when running the human keypoint detection pipeline with Python, to switch the device from NVIDIA GPU to Ascend NPU, simply change `device` in the script to npu:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="human_keypoint_detection",
    device="npu:0" # gpu:0 --> npu:0
    )
```

If you want to use the human keypoint detection pipeline on more kinds of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).