
Fix typos in multiple files (#3951)

co63oc, 6 months ago
parent commit 4203eb5398
30 changed files with 35 additions and 35 deletions
  1. +1 -1  README_en.md
  2. +1 -1  docs/data_annotations/cv_modules/image_classification.md
  3. +1 -1  docs/data_annotations/cv_modules/image_feature.md
  4. +1 -1  docs/data_annotations/cv_modules/instance_segmentation.md
  5. +1 -1  docs/data_annotations/cv_modules/ml_classification.md
  6. +1 -1  docs/data_annotations/cv_modules/object_detection.md
  7. +1 -1  docs/data_annotations/cv_modules/semantic_segmentation.md
  8. +1 -1  docs/module_usage/instructions/model_python_API.md
  9. +1 -1  docs/module_usage/tutorials/cv_modules/image_feature.en.md
  10. +1 -1  docs/module_usage/tutorials/cv_modules/image_feature.md
  11. +1 -1  docs/module_usage/tutorials/cv_modules/mainbody_detection.en.md
  12. +1 -1  docs/module_usage/tutorials/cv_modules/mainbody_detection.md
  13. +1 -1  docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md
  14. +1 -1  docs/other_devices_support/how_to_contribute_device.md
  15. +1 -1  docs/pipeline_usage/instructions/pipeline_python_API.md
  16. +1 -1  docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md
  17. +1 -1  docs/pipeline_usage/tutorials/video_pipelines/video_classification.en.md
  18. +1 -1  docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial.en.md
  19. +1 -1  docs/practical_tutorials/ts_anomaly_detection.md
  20. +1 -1  docs/support_list/model_list_npu.md
  21. +1 -1  docs/support_list/models_list.md
  22. +2 -2  docs/support_list/pipelines_list.md
  23. +1 -1  docs/support_list/pipelines_list_npu.md
  24. +1 -1  libs/ultra-infer/UltraInfer.cmake.in
  25. +1 -1  libs/ultra-infer/python/setup.py
  26. +2 -2  libs/ultra-infer/python/ultra_infer/__init__.py
  27. +1 -1  libs/ultra-infer/python/ultra_infer/c_lib_wrap.py.in
  28. +3 -3  libs/ultra-infer/python/ultra_infer/download.py
  29. +2 -2  libs/ultra-infer/python/ultra_infer/runtime.py
  30. +1 -1  libs/ultra-infer/scripts/ultra_infer_init.sh

+ 1 - 1
README_en.md

@@ -738,7 +738,7 @@ To use the Python script for other pipelines, simply adjust the `pipeline` param
 * [📑 PaddleX Pipeline Usage Overview](https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/pipeline_develop_guide.html)
 
 * <details open>
-    <summary> <b> 📝 Information Extracion</b></summary>
+    <summary> <b> 📝 Information Extraction</b></summary>
 
    * [📄 PP-ChatOCRv3 Pipeline Tutorial](https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction.html)
   </details>

+ 1 - 1
docs/data_annotations/cv_modules/image_classification.md

@@ -40,7 +40,7 @@ labelme images --nodata --autosave --output annotations --flags flags.txt
 * `flags` 为图像创建分类标签,传入标签路径。
 * `nodata` 停止将图像数据存储到 JSON 文件。
 * `autosave` 自动存储。
-* `ouput` 标签文件存储路径。
+* `output` 标签文件存储路径。
 #### 1.3.3 开始图片标注
 * 启动 `labelme` 后如图所示:
 

+ 1 - 1
docs/data_annotations/cv_modules/image_feature.md

@@ -40,7 +40,7 @@ labelme images --nodata --autosave --output annotations --flags flags.txt
 * `flags` 为图像创建分类标签,传入标签路径。
 * `nodata` 停止将图像数据存储到 JSON 文件。
 * `autosave` 自动存储。
-* `ouput` 标签文件存储路径。
+* `output` 标签文件存储路径。
 #### 1.3.3 开始图片标注
 * 启动 `labelme` 后如图所示:
 

+ 1 - 1
docs/data_annotations/cv_modules/instance_segmentation.md

@@ -52,7 +52,7 @@ labelme images --labels label.txt --nodata --autosave --output annotations
 * `labels` 类别标签路径。
 * `nodata` 停止将图像数据存储到JSON文件。
 * `autosave` 自动存储。
-* `ouput` 标签文件存储路径。
+* `output` 标签文件存储路径。
 #### 2.3.3 开始图片标注
 * 启动 `labelme` 后如图所示:
 

+ 1 - 1
docs/data_annotations/cv_modules/ml_classification.md

@@ -53,7 +53,7 @@ labelme images --labels label.txt --nodata --autosave --output annotations
 * `flags` 为图像创建分类标签,传入标签路径。
 * `nodata` 停止将图像数据存储到 `JSON`文件。
 * `autosave` 自动存储。
-* `ouput` 标签文件存储路径。
+* `output` 标签文件存储路径。
 #### 2.3.3 开始图片标注
 * 启动 `Labelme` 后如图所示:
 

+ 1 - 1
docs/data_annotations/cv_modules/object_detection.md

@@ -45,7 +45,7 @@ labelme images --labels label.txt --nodata --autosave --output annotations
 * `flags` 为图像创建分类标签,传入标签路径。
 * `nodata` 停止将图像数据存储到 `JSON`文件。
 * `autosave` 自动存储。
-* `ouput` 标签文件存储路径。
+* `output` 标签文件存储路径。
 #### 2.3.3 开始图片标注
 * 启动 `Labelme` 后如图所示:
 

+ 1 - 1
docs/data_annotations/cv_modules/semantic_segmentation.md

@@ -54,7 +54,7 @@ labelme images --nodata --autosave --output annotations
 ```
 * `nodata` 停止将图像数据存储到JSON文件
 * `autosave` 自动存储
-* `ouput` 标签文件存储路径
+* `output` 标签文件存储路径
 #### 2.3.3 开始图片标注
 * 启动 `labelme` 后如图所示:
 

+ 1 - 1
docs/module_usage/instructions/model_python_API.md

@@ -103,7 +103,7 @@ PaddleX 支持通过`PaddlePredictorOption`修改推理配置,相关API如下
 
 #### 属性:
 
-* `deivce`:推理设备;
+* `device`:推理设备;
   * 支持设置 `str` 类型表示的推理设备类型及卡号,设备类型支持可选 “gpu”、“cpu”、“npu”、“xpu”、“mlu”、“dcu”,当使用加速卡时,支持指定卡号,如使用 0 号 GPU:`gpu:0`,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU;
   * 返回值:`str`类型,当前设置的推理设备。
 * `run_mode`:运行模式;
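
For context, the `device` attribute corrected in this hunk is normally set on a predictor option object before a model is created. The sketch below is illustrative only and is not part of this commit; the import paths, the `pp_option` keyword, and the model name are assumptions based on the surrounding PaddleX documentation rather than verified API details.

```python
# Minimal sketch, assuming PaddlePredictorOption is importable from paddlex.inference
# and that create_model accepts it via pp_option; adjust to the actual docs if needed.
from paddlex import create_model
from paddlex.inference import PaddlePredictorOption

option = PaddlePredictorOption()
option.device = "gpu:0"   # device type plus card number; "cpu", "npu:0", etc. are also valid
print(option.device)      # reading the attribute returns the current setting as a str

model = create_model("PP-LCNet_x1_0", pp_option=option)  # hypothetical model name
```
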

+ 1 - 1
docs/module_usage/tutorials/cv_modules/image_feature.en.md

@@ -513,7 +513,7 @@ The model can be directly integrated into the PaddleX pipeline or directly into
 
 1.<b>Pipeline Integration</b>
 
-The image feature module can be integrated into the <b>General Image Recognition Pipeline</b> (comming soon) of PaddleX. Simply replace the model path to update the image feature module of the relevant pipeline. In pipeline integration, you can use serving deployment to deploy your trained model.
+The image feature module can be integrated into the <b>General Image Recognition Pipeline</b> (coming soon) of PaddleX. Simply replace the model path to update the image feature module of the relevant pipeline. In pipeline integration, you can use serving deployment to deploy your trained model.
 
 2.<b>Module Integration</b>
 

+ 1 - 1
docs/module_usage/tutorials/cv_modules/image_feature.md

@@ -512,7 +512,7 @@ python main.py -c paddlex/configs/modules/image_feature/PP-ShiTuV2_rec.yaml  \
 
 1.<b>产线集成</b>
 
-图像特征模块可以集成的 PaddleX 产线有<b>通用图像特征产线</b>(comming soon),只需要替换模型路径即可完成相关产线的图像特征模块的模型更新。在产线集成中,你可以使用服务化部署来部署你得到的模型。
+图像特征模块可以集成的 PaddleX 产线有<b>通用图像特征产线</b>(coming soon),只需要替换模型路径即可完成相关产线的图像特征模块的模型更新。在产线集成中,你可以使用服务化部署来部署你得到的模型。
 
 2.<b>模块集成</b>
 

+ 1 - 1
docs/module_usage/tutorials/cv_modules/mainbody_detection.en.md

@@ -484,7 +484,7 @@ The model can be directly integrated into the PaddleX pipeline or directly into
 
 1. <b>Pipeline Integration</b>
 
-The main body detection module can be integrated into PaddleX pipelines such as <b>General Object Detection</b> (comming soon). Simply replace the model path to update the main body detection module of the relevant pipeline. In pipeline integration, you can use high-performance inference and serving deployment to deploy your trained model.
+The main body detection module can be integrated into PaddleX pipelines such as <b>General Object Detection</b> (coming soon). Simply replace the model path to update the main body detection module of the relevant pipeline. In pipeline integration, you can use high-performance inference and serving deployment to deploy your trained model.
 
 2. <b>Module Integration</b>
 

+ 1 - 1
docs/module_usage/tutorials/cv_modules/mainbody_detection.md

@@ -479,7 +479,7 @@ python main.py -c paddlex/configs/modules/mainbody_detection/PP-ShiTuV2_det.yaml
 
 1.<b>产线集成</b>
 
-主体检测模块可以集成的PaddleX产线有<b>通用图像识别</b>(comming soon),只需要替换模型路径即可完成相关产线的主体检测模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+主体检测模块可以集成的PaddleX产线有<b>通用图像识别</b>(coming soon),只需要替换模型路径即可完成相关产线的主体检测模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 2.<b>模块集成</b>
 

+ 1 - 1
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md

@@ -14,7 +14,7 @@ comments: true
 <thead>
 <tr>
 <th>模型名称</th><th>模型下载链接</th>
-<th>precison</th>
+<th>precision</th>
 <th>recall</th>
 <th>f1_score</th>
 <th>模型存储大小(M)</th>

+ 1 - 1
docs/other_devices_support/how_to_contribute_device.md

@@ -44,7 +44,7 @@
 
 ### 2.1.4 更新Predictor Opiton支持的设备列表
 
-PaddleX创建Predictor时会判断设备是否已支持,相关代码位于 [PaddleX Predictor Opiton](../../paddlex/inference/utils/pp_option.py) 中的 `SUPPORT_DEVICE`
+PaddleX创建Predictor时会判断设备是否已支持,相关代码位于 [PaddleX Predictor Option](../../paddlex/inference/utils/pp_option.py) 中的 `SUPPORT_DEVICE`
 
 ### 2.1.5 更新Predictor Opiton支持的设备列表
 

+ 1 - 1
docs/pipeline_usage/instructions/pipeline_python_API.md

@@ -100,7 +100,7 @@ PaddleX 支持通过`PaddlePredictorOption`修改推理配置,相关API如下
 
 #### 属性:
 
-* `deivce`:推理设备;
+* `device`:推理设备;
   * 支持设置 `str` 类型表示的推理设备类型及卡号,设备类型支持可选 “gpu”、“cpu”、“npu”、“xpu”、“mlu”、“dcu”,当使用加速卡时,支持指定卡号,如使用 0 号 GPU:`gpu:0`,默认情况下,如有 GPU 设置则使用 0 号 GPU,否则使用 CPU;
   * 返回值:`str`类型,当前设置的推理设备。
 * `run_mode`:运行模式;

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md

@@ -17,7 +17,7 @@ comments: true
 <thead>
 <tr>
 <th>模型</th><th>模型下载链接</th>
-<th>precison</th>
+<th>precision</th>
 <th>recall</th>
 <th>f1_score</th>
 <th>模型存储大小(M)</th>

+ 1 - 1
docs/pipeline_usage/tutorials/video_pipelines/video_classification.en.md

@@ -456,7 +456,7 @@ Below are the API references and multi-language service invocation examples for
 <tr>
 <td><code>topk</code></td>
 <td><code>integer</code> | <code>null</code></td>
-<td>Please refer to the description of the <code>topk</code> parameter of the pipeline object's <code>predict</code> mehod.</td>
+<td>Please refer to the description of the <code>topk</code> parameter of the pipeline object's <code>predict</code> method.</td>
 <td>No</td>
 </tr>
 </tbody>

+ 1 - 1
docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial.en.md

@@ -362,7 +362,7 @@ for res in output:
     res.save_to_img("./output/") # Save the result as a visualized image
     res.save_to_json("./output/") # Save the structured output of the prediction
 ```
-For more parameters, please refer to the [General Instance Segmentation Pipline User Guide](../pipeline_usage/tutorials/cv_pipelines/instance_segmentation.en.md)。
+For more parameters, please refer to the [General Instance Segmentation Pipeline User Guide](../pipeline_usage/tutorials/cv_pipelines/instance_segmentation.en.md)。
 
 2. Additionally, PaddleX offers three other deployment methods, detailed as follows:
 

+ 1 - 1
docs/practical_tutorials/ts_anomaly_detection.md

@@ -32,7 +32,7 @@ PaddleX 提供了4个端到端的时序异常检测模型,具体可参考 [模
 <thead>
 <tr>
 <th>模型名称</th>
-<th>precison</th>
+<th>precision</th>
 <th>recall</th>
 <th>f1_score</th>
 <th>模型存储大小(M)</th>

+ 1 - 1
docs/support_list/model_list_npu.md

@@ -1314,7 +1314,7 @@ PaddleX 内置了多条产线,每条产线都包含了若干模块,每个模
 <thead>
 <tr>
 <th>模型名称</th>
-<th>precison</th>
+<th>precision</th>
 <th>recall</th>
 <th>f1_score</th>
 <th>模型存储大小(M)</th>

+ 1 - 1
docs/support_list/models_list.md

@@ -2612,7 +2612,7 @@ devanagari_PP-OCRv3_mobile_rec_infer.tar">推理模型</a>/<a href="">训练模
 <thead>
 <tr>
 <th>模型名称</th>
-<th>precison</th>
+<th>precision</th>
 <th>recall</th>
 <th>f1_score</th>
 <th>模型存储大小</th>

+ 2 - 2
docs/support_list/pipelines_list.md

@@ -101,7 +101,7 @@ comments: true
   <tr>
     <td rowspan = 8>文档场景信息抽取v4</td>
     <td>表格结构识别</td>
-    <td rowspan = 8>comming soon</td>
+    <td rowspan = 8>coming soon</td>
     <td rowspan = 8>文档场景信息抽取v4(PP-ChatOCRv4)是飞桨特色的文档和图像智能分析解决方案,结合了 LLM、MLLM 和 OCR 技术,在文档场景信息抽取v3的基础上,优化了版面分析、生僻字、多页 pdf、表格、印章识别等常见的复杂文档信息抽取难点问题,结合文心大模型将海量数据和知识相融合,准确率高且应用广泛。本产线同时提供了灵活的服务化部署方式,支持在多种硬件上部署。不仅如此,本产线也提供了二次开发的能力,您可以基于本产线在您自己的数据集上训练调优,训练后的模型也可以无缝集成。
 </td>
     <td rowspan="8">
@@ -326,7 +326,7 @@ comments: true
   <tr>
     <td rowspan = 13>通用版面解析v3</td>
     <td>版面区域检测模块</td>
-    <td rowspan = 13>comming soon</td>
+    <td rowspan = 13>coming soon</td>
     <td rowspan = 13>通用版面解析v3产线在通用版面解析v1产线的基础上,强化了版面区域检测、表格识别、公式识别的能力,增加了多栏阅读顺序的恢复能力、结果转换 Markdown 文件的能力,在多种文档数据中,表现优异,可以处理较复杂的文档数据。本产线同时提供了灵活的服务化部署方式,支持在多种硬件上使用多种编程语言调用。不仅如此,本产线也提供了二次开发的能力,您可以基于本产线在您自己的数据集上训练调优,训练后的模型也可以无缝集成。</td>
     <td rowspan="13">
   <ul>

+ 1 - 1
docs/support_list/pipelines_list_npu.md

@@ -101,7 +101,7 @@ comments: true
   <tr>
     <td rowspan = 8>文档场景信息抽取v4</td>
     <td>表格结构识别</td>
-    <td rowspan = 8>comming soon</td>
+    <td rowspan = 8>coming soon</td>
     <td rowspan = 8>文档场景信息抽取v4(PP-ChatOCRv4)是飞桨特色的文档和图像智能分析解决方案,结合了 LLM、MLLM 和 OCR 技术,在文档场景信息抽取v3的基础上,优化了版面分析、生僻字、多页 pdf、表格、印章识别等常见的复杂文档信息抽取难点问题,结合文心大模型将海量数据和知识相融合,准确率高且应用广泛。本产线同时提供了灵活的服务化部署方式,支持在多种硬件上部署。不仅如此,本产线也提供了二次开发的能力,您可以基于本产线在您自己的数据集上训练调优,训练后的模型也可以无缝集成。
 </td>
     <td rowspan="8">

+ 1 - 1
libs/ultra-infer/UltraInfer.cmake.in

@@ -21,7 +21,7 @@ set(WITH_TESTING @WITH_TESTING@)
 set(BUILD_ON_JETSON @BUILD_ON_JETSON@)
 set(RKNN2_TARGET_SOC "@RKNN2_TARGET_SOC@")
 
-# Inference backend and UltraInfer Moudle
+# Inference backend and UltraInfer Module
 set(ENABLE_ORT_BACKEND @ENABLE_ORT_BACKEND@)
 set(ENABLE_RKNPU2_BACKEND @ENABLE_RKNPU2_BACKEND@)
 set(ENABLE_TVM_BACKEND @ENABLE_TVM_BACKEND@)

+ 1 - 1
libs/ultra-infer/python/setup.py

@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-# This file refered to github.com/onnx/onnx.git
+# This file referred to github.com/onnx/onnx.git
 
 from __future__ import absolute_import, division, print_function, unicode_literals
 

+ 2 - 2
libs/ultra-infer/python/ultra_infer/__init__.py

@@ -41,7 +41,7 @@ if os.name != "nt" and os.path.exists(trt_directory):
                 logging.warning(
                     f"Failed to create a symbolic link pointing to {src} by an unprivileged user. "
                     "It may failed when you use Paddle TensorRT backend. "
-                    "Please use administator privilege to import ultra_infer at first time."
+                    "Please use administrator privilege to import ultra_infer at first time."
                 )
                 break
 
@@ -55,7 +55,7 @@ from .code_version import version, git_version, extra_version_info
 from .code_version import enable_trt_backend, enable_paddle_backend, with_gpu
 
 # Note(zhoushunjie): Fix the import order of paddle and ultra_infer library.
-# This solution will be removed it when the confilct of paddle and
+# This solution will be removed it when the conflict of paddle and
 # ultra_infer is fixed.
 
 # Note(qiuyanjun): Add backward compatible for paddle 2.4.x

+ 1 - 1
libs/ultra-infer/python/ultra_infer/c_lib_wrap.py.in

@@ -167,7 +167,7 @@ if os.name == "nt":
 try:
     from .libs.@PY_LIBRARY_NAME@ import *
 except Exception as e:
-    raise RuntimeError(f"UltraInfer initalized failed! Error: {e}")
+    raise RuntimeError(f"UltraInfer initialized failed! Error: {e}")
 
 
 def TensorInfoStr(tensor_info):

+ 3 - 3
libs/ultra-infer/python/ultra_infer/download.py

@@ -104,7 +104,7 @@ def download(url, path, rename=None, md5sum=None, show_progress=False):
                 "{}!".format(url, req.status_code)
             )
 
-        # For protecting download interupted, download to
+        # For protecting download interrupted, download to
         # tmp_fullname firstly, move tmp_fullname to fullname
         # after download finished
         tmp_fullname = fullname + "_tmp"
@@ -133,7 +133,7 @@ def decompress(fname):
     """
     logging.info("Decompressing {}...".format(fname))
 
-    # For protecting decompressing interupted,
+    # For protecting decompressing interrupted,
     # decompress to fpath_tmp directory firstly, if decompress
     # successed, move decompress files to fpath and delete
     # fpath_tmp and remove download compress file.
@@ -183,7 +183,7 @@ def decompress(fname):
 
 def url2dir(url, path, rename=None):
     full_name = download(url, path, rename, show_progress=True)
-    print("File is donwloaded, now extracting...")
+    print("File is downloaded, now extracting...")
     if url.count(".tgz") > 0 or url.count(".tar") > 0 or url.count("zip") > 0:
         return decompress(full_name)
 

+ 2 - 2
libs/ultra-infer/python/ultra_infer/runtime.py

@@ -471,7 +471,7 @@ class RuntimeOption:
 
         :param tensor_name: (str)Name of input which has dynamic shape
         :param min_shape: (list of int)Minimum shape of the input, e.g [1, 3, 224, 224]
-        :param opt_shape: (list of int)Optimize shape of the input, this offten set as the most common input shape, if set to None, it will keep same with min_shape
+        :param opt_shape: (list of int)Optimize shape of the input, this often set as the most common input shape, if set to None, it will keep same with min_shape
         :param max_shape: (list of int)Maximum shape of the input, e.g [8, 3, 224, 224], if set to None, it will keep same with the min_shape
         """
         logging.warning(
@@ -539,7 +539,7 @@ class RuntimeOption:
         self._option.trt_option.enable_fp16 = False
 
     def enable_pinned_memory(self):
-        """Enable pinned memory. Pinned memory can be utilized to speedup the data transfer between CPU and GPU. Currently it's only suppurted in TRT backend and Paddle Inference backend."""
+        """Enable pinned memory. Pinned memory can be utilized to speedup the data transfer between CPU and GPU. Currently it's only supported in TRT backend and Paddle Inference backend."""
         return self._option.enable_pinned_memory()
 
     def disable_pinned_memory(self):
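
The docstring patched here documents the dynamic-shape configuration on `RuntimeOption` (in the FastDeploy-style API this code derives from, the method is `set_trt_input_shape`). The call below is a hedged illustration of how those `min_shape`/`opt_shape`/`max_shape` parameters fit together; the method name, the `use_trt_backend` helper, and the tensor name are assumptions, not details confirmed by this diff.

```python
# Hedged sketch based on the FastDeploy-derived RuntimeOption API; verify names
# against the ultra_infer package before relying on this.
from ultra_infer import RuntimeOption

option = RuntimeOption()
option.use_trt_backend()                  # assumed helper that selects the TensorRT backend
option.set_trt_input_shape(
    "image",                              # hypothetical name of the dynamic-shape input
    min_shape=[1, 3, 224, 224],
    opt_shape=[4, 3, 224, 224],           # most common shape; falls back to min_shape if None
    max_shape=[8, 3, 224, 224],           # falls back to min_shape if None
)
option.enable_pinned_memory()             # per the fixed docstring, only TRT and Paddle Inference support this
```
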

+ 1 - 1
libs/ultra-infer/scripts/ultra_infer_init.sh

@@ -31,7 +31,7 @@ for DYLIB_FILE in $ALL_DYLIB_FILES;do
     LIBS_DIRECTORIES+=(${DYLIB_FILE%/*})
 done
 
-# Remove the dumplicate directories
+# Remove the duplicate directories
 LIBS_DIRECTORIES=($(awk -v RS=' ' '!a[$1]++' <<< ${LIBS_DIRECTORIES[@]}))
 
 # Print the dynamic library location and output the configuration file