Merge pull request #3828 from myhloli/dev

Dev
Xiaomeng Zhao, 3 weeks ago
parent
commit
ae084eb317
4 changed files with 11 additions and 6 deletions
  1. README.md (+1 −1)
  2. README_zh-CN.md (+1 −1)
  3. mineru/backend/vlm/vlm_analyze.py (+4 −1)
  4. mineru/model/vlm_vllm_model/server.py (+5 −3)

+ 1 - 1
README.md

@@ -44,7 +44,7 @@
 </div>
 
 # Changelog
-- 2025/10/24 2.6.0 Release
+- 2025/10/24 2.6.1 Release
   - `pipeline` backend optimizations
     - Added experimental support for Chinese formulas, which can be enabled by setting the environment variable `export MINERU_FORMULA_CH_SUPPORT=1`. This feature may cause a slight decrease in MFR speed and failures in recognizing some long formulas. It is recommended to enable it only when parsing Chinese formulas is needed. To disable this feature, set the environment variable to `0`.
     - `OCR` speed significantly improved by 200%~300%, thanks to the optimization solution provided by [@cjsdurj](https://github.com/cjsdurj)

+ 1 - 1
README_zh-CN.md

@@ -44,7 +44,7 @@
 </div>
 
 # 更新记录
-- 2025/10/24 2.6.0 发布
+- 2025/10/24 2.6.1 发布
   - `pipline`后端优化
     - 增加对中文公式的实验性支持,可通过配置环境变量`export MINERU_FORMULA_CH_SUPPORT=1`开启。该功能可能会导致MFR速率略微下降、部分长公式识别失败等问题,建议仅在需要解析中文公式的场景下开启。如需关闭该功能,可将环境变量设置为`0`。
     - `OCR`速度大幅提升200%~300%,感谢 [@cjsdurj](https://github.com/cjsdurj) 提供的优化方案

+ 4 - 1
mineru/backend/vlm/vlm_analyze.py

@@ -76,7 +76,10 @@ class ModelSingleton:
                     if batch_size == 0:
                         batch_size = set_defult_batch_size()
                 else:
-                    os.environ["OMP_NUM_THREADS"] = "1"
+
+                    if os.getenv('OMP_NUM_THREADS') is None:
+                        os.environ["OMP_NUM_THREADS"] = "1"
+
                     if backend == "vllm-engine":
                         try:
                             import vllm
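The guard added above can be exercised on its own: set `OMP_NUM_THREADS` only when the user has not already exported it, so an explicit override survives. A minimal sketch using only the standard library; `pin_omp_threads` is a hypothetical helper name, not part of MinerU.

```python
import os

def pin_omp_threads(default: str = "1") -> str:
    # Hypothetical helper: respect a user-provided OMP_NUM_THREADS,
    # otherwise pin it to the default. setdefault only writes the key
    # when it is absent, matching the `if os.getenv(...) is None` guard.
    return os.environ.setdefault("OMP_NUM_THREADS", default)
```

Before this change the unconditional assignment silently clobbered any value the user had exported before launching the backend.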

+ 5 - 3
mineru/model/vlm_vllm_model/server.py

@@ -1,7 +1,7 @@
 import os
 import sys
 
-from mineru.backend.vlm.custom_logits_processors import enable_custom_logits_processors
+from mineru.backend.vlm.utils import set_defult_gpu_memory_utilization, enable_custom_logits_processors
 from mineru.utils.models_download_utils import auto_download_and_get_model_root_path
 
 from vllm.entrypoints.cli.main import main as vllm_main
@@ -43,7 +43,8 @@ def main():
     if not has_port_arg:
         args.extend(["--port", "30000"])
     if not has_gpu_memory_utilization_arg:
-        args.extend(["--gpu-memory-utilization", "0.7"])
+        gpu_memory_utilization = str(set_defult_gpu_memory_utilization())
+        args.extend(["--gpu-memory-utilization", gpu_memory_utilization])
     if not model_path:
         model_path = auto_download_and_get_model_root_path("/", "vlm")
     if (not has_logits_processors_arg) and custom_logits_processors:
@@ -52,7 +53,8 @@ def main():
     # Rebuild the arguments, passing the model path as a positional argument
     sys.argv = [sys.argv[0]] + ["serve", model_path] + args
 
-    os.environ["OMP_NUM_THREADS"] = "1"
+    if os.getenv('OMP_NUM_THREADS') is None:
+        os.environ["OMP_NUM_THREADS"] = "1"
 
     # Start the vllm server
     print(f"start vllm server: {sys.argv}")
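The pattern in `main()` above, which fills in a flag only when the caller has not supplied it, can be sketched in isolation. `with_defaults` is a hypothetical helper, and the literal `"0.7"` stands in for the value the patch now obtains from `set_defult_gpu_memory_utilization()`.

```python
def with_defaults(args: list[str]) -> list[str]:
    # Hypothetical helper mirroring main(): append a default flag only
    # when the caller has not passed it, in either "--flag value" or
    # "--flag=value" form.
    out = list(args)
    if not any(a == "--port" or a.startswith("--port=") for a in out):
        out.extend(["--port", "30000"])
    if not any(a == "--gpu-memory-utilization"
               or a.startswith("--gpu-memory-utilization=") for a in out):
        # The patch computes this value via set_defult_gpu_memory_utilization().
        out.extend(["--gpu-memory-utilization", "0.7"])
    return out
```

Checking both flag forms matters because vLLM accepts `--gpu-memory-utilization 0.5` and `--gpu-memory-utilization=0.5` alike; testing only the space-separated form would append a conflicting duplicate.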