feat: Add service daemon scripts for various OCR tools

- Introduced new daemon scripts for MinerU vLLM, PaddleOCR-VL vLLM, DotsOCR vLLM, and PP-StructureV3.
- Each script includes functionality to start, stop, restart, check status, view logs, and test API endpoints.
- Added a comprehensive README to document the usage, configuration, and service management for all scripts.
- Ensured proper logging and configuration management for each service, enhancing usability and maintainability.
zhch158_admin committed 1 week ago · commit f5e90f19ad

+ 341 - 0
ocr_tools/daemons/README.md

@@ -0,0 +1,341 @@
+# Server-Side Daemon Scripts
+
+This directory contains the server-side daemon scripts for all OCR tools; they start and manage the HTTP API services.
+
+## Overview
+
+These daemon scripts start and manage the HTTP API endpoints of the various OCR services on a server. The client tools (under the `ocr_tools/*_tool/` directories) call these API endpoints to process documents remotely.
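+
+Once a service is up, you can sanity-check it from any machine with plain `curl`. A minimal sketch, assuming the MinerU vLLM service on its default port 8121 (the vLLM-based services expose an OpenAI-compatible `/v1/models` endpoint, the same one the daemons' own `test` commands probe):
+
+```bash
+# List the models served by the endpoint (host and port are illustrative)
+curl -s http://localhost:8121/v1/models | python -m json.tool
+```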
+
+## Script List
+
+| Script | Service Type | Default Port | Service URL |
+|---------|---------|---------|---------|
+| `mineru_vllm_daemon.sh` | MinerU vLLM | 8121 | http://localhost:8121 |
+| `ppstructure_v3_daemon.sh` | PP-StructureV3 API | 8111 | http://localhost:8111/layout-parsing |
+| `paddle_vllm_daemon.sh` | PaddleOCR-VL vLLM | 8110 | http://localhost:8110 |
+| `dotsocr_vllm_daemon.sh` | DotsOCR vLLM | 8101 | http://localhost:8101 |
+
+## Script-to-Client-Tool Mapping
+
+| Server Script | Client Tool | Service Type | Default Port | API Endpoint |
+|-----------|----------|---------|---------|---------|
+| `mineru_vllm_daemon.sh` | `mineru_vl_tool/main.py` | MinerU vLLM | 8121 | http://localhost:8121 |
+| `ppstructure_v3_daemon.sh` | `ppstructure_tool/api_client.py` | PP-StructureV3 API | 8111 | http://localhost:8111/layout-parsing |
+| `paddle_vllm_daemon.sh` | `paddle_vl_tool/main.py` | PaddleOCR-VL vLLM | 8110 | http://localhost:8110 |
+| `dotsocr_vllm_daemon.sh` | `dots.ocr_vl_tool/main.py` | DotsOCR vLLM | 8101 | http://localhost:8101 |
+
+## Quick Start
+
+### Basic Usage
+
+All scripts support the following commands:
+
+```bash
+# Start the service
+./script_name.sh start
+
+# Stop the service
+./script_name.sh stop
+
+# Restart the service
+./script_name.sh restart
+
+# Check service status
+./script_name.sh status
+
+# Follow the logs (live)
+./script_name.sh logs
+
+# Show the configuration
+./script_name.sh config
+
+# Test the API (where supported)
+./script_name.sh test
+```
+
+### Examples
+
+```bash
+# Start the MinerU vLLM service
+cd ocr_tools/daemons
+./mineru_vllm_daemon.sh start
+
+# Check service status
+./mineru_vllm_daemon.sh status
+
+# Follow the logs
+./mineru_vllm_daemon.sh logs
+```
+
+## Script Details
+
+### 1. mineru_vllm_daemon.sh
+
+**Service type**: MinerU vLLM service
+
+**Configuration parameters**:
+- `CONDA_ENV`: conda environment name (default: `mineru2`)
+- `PORT`: service port (default: `8121`)
+- `HOST`: service host (default: `0.0.0.0`)
+- `MODEL_PATH`: model path
+- `MODEL_NAME`: model name (default: `MinerU2.5`)
+- `GPU_MEMORY_UTILIZATION`: GPU memory utilization (default: `0.3`)
+- `CUDA_VISIBLE_DEVICES`: visible CUDA devices (default: `4`)
+
+**How to start**:
+```bash
+./mineru_vllm_daemon.sh start
+```
+
+**Service URLs**:
+- API endpoint: `http://localhost:8121`
+- API docs: `http://localhost:8121/docs`
+
+**Environment dependencies**:
+- conda environment: `mineru2`
+- requires: `mineru-vllm-server`
+
+**Client usage**:
+```bash
+# Call the service with mineru_vl_tool
+cd ../mineru_vl_tool
+python main.py --input document.pdf --output_dir ./output --server_url http://localhost:8121
+```
+
+### 2. ppstructure_v3_daemon.sh
+
+**Service type**: PP-StructureV3 API service
+
+**Configuration parameters**:
+- `CONDA_ENV`: conda environment name (default: `paddle`)
+- `PORT`: service port (default: `8111`)
+- `CUDA_VISIBLE_DEVICES`: visible CUDA devices (default: `7`)
+- `SCRIPT_DIR`: PaddleX script directory
+- `PIPELINE_CONFIG`: pipeline config file path
+
+**How to start**:
+```bash
+./ppstructure_v3_daemon.sh start
+```
+
+**Service URLs**:
+- API endpoint: `http://localhost:8111/layout-parsing`
+- API docs: `http://localhost:8111/docs`
+
+**Environment dependencies**:
+- conda environment: `paddle`
+- requires: `paddlex`
+- requires the script `start_paddlex_with_adapter.py` (in `ocr_tools/paddle_common/`)
+
+**Configuration notes**:
+- `PADDLE_COMMON_DIR` in the script must be set to the absolute path of `ocr_platform/ocr_tools/paddle_common` in your deployment (see the sketch below)
+- The pipeline config can be any of the files under `paddle_common/config/`
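+
+A minimal sketch of that adjustment, assuming the repository is checked out at `/opt/ocr_platform` (an illustrative path; `PADDLE_COMMON_DIR` sits near the top of the script):
+
+```bash
+# Point the daemon at the real paddle_common directory (hypothetical path)
+sed -i 's|^PADDLE_COMMON_DIR=.*|PADDLE_COMMON_DIR="/opt/ocr_platform/ocr_tools/paddle_common"|' \
+    ppstructure_v3_daemon.sh
+```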
+
+**Client usage**:
+```bash
+# Call the service with ppstructure_tool/api_client.py
+cd ../ppstructure_tool
+python api_client.py --input document.pdf --output_dir ./output --api_url http://localhost:8111/layout-parsing
+```
+
+### 3. paddle_vllm_daemon.sh
+
+**Service type**: PaddleOCR-VL vLLM service
+
+**Configuration parameters**:
+- `CONDA_ENV`: conda environment name (default: `paddle`)
+- `PORT`: service port (default: `8110`)
+- `HOST`: service host (default: `0.0.0.0`)
+- `MODEL_NAME`: model name (default: `PaddleOCR-VL-0.9B`)
+- `BACKEND`: backend type (default: `vllm`)
+- `GPU_MEMORY_UTILIZATION`: GPU memory utilization (default: `0.3`)
+- `CUDA_VISIBLE_DEVICES`: visible CUDA devices (default: `3`)
+
+**How to start**:
+```bash
+./paddle_vllm_daemon.sh start
+```
+
+**Service URLs**:
+- API endpoint: `http://localhost:8110`
+- API docs: `http://localhost:8110/docs`
+
+**Environment dependencies**:
+- conda environment: `paddle`
+- requires: `paddlex` plus the `genai-vllm-server` plugin
+  ```bash
+  paddlex --install genai-vllm-server
+  ```
+
+**Client usage**:
+```bash
+# Call the service with paddle_vl_tool (the vLLM server is specified via the pipeline config)
+cd ../paddle_vl_tool
+python main.py --input document.pdf --output_dir ./output \
+  --pipeline ../paddle_common/config/PaddleOCR-VL-Client.yaml
+```
+
+### 4. dotsocr_vllm_daemon.sh
+
+**Service type**: DotsOCR vLLM service
+
+**Configuration parameters**:
+- `CONDA_ENV`: conda environment name (default: `dots.ocr`)
+- `PORT`: service port (default: `8101`)
+- `HOST`: service host (default: `0.0.0.0`)
+- `HF_MODEL_PATH`: HuggingFace model path
+- `MODEL_NAME`: model name (default: `DotsOCR`)
+- `GPU_MEMORY_UTILIZATION`: GPU memory utilization (default: `0.70`)
+- `CUDA_VISIBLE_DEVICES`: visible CUDA devices (default: `1,2`)
+- `DATA_PARALLEL_SIZE`: data-parallel size (default: `2`)
+
+**How to start**:
+```bash
+./dotsocr_vllm_daemon.sh start
+```
+
+**Service URLs**:
+- API endpoint: `http://localhost:8101`
+- API docs: `http://localhost:8101/docs`
+
+**Environment dependencies**:
+- conda environment: `dots.ocr`
+- requires: `vllm`
+- requires the DotsOCR model files
+
+**Client usage**:
+```bash
+# Call the service with dots.ocr_vl_tool
+cd ../dots.ocr_vl_tool
+python main.py --input document.pdf --output_dir ./output --ip localhost --port 8101
+```
+
+## Deployment Recommendations
+
+### 1. Environment Preparation
+
+- Make sure all required conda environments are installed
+- Make sure the model files are downloaded and in the expected locations
+- Make sure the GPU driver and CUDA are installed correctly
+
+### 2. Configuration Adjustments
+
+Before deploying, adjust the configuration parameters in each script to match your environment (a sketch of scripted edits follows this list):
+
+- **Log directory**: defaults to `/home/ubuntu/zhch/logs`; change as needed
+- **conda path**: adjust to the actual conda installation path
+- **Model paths**: make sure the model paths are correct
+- **GPU devices**: adjust `CUDA_VISIBLE_DEVICES` to match your GPUs
+- **Port numbers**: make sure the ports are not already in use
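+
+A minimal sketch of such adjustments, assuming you want the MinerU daemon to log under `/var/log/ocr` and use GPU 0 (both values are illustrative; the variables sit at the top of each script):
+
+```bash
+# Adjust the log directory and the GPU device in place (hypothetical values)
+sed -i 's|^LOGDIR=.*|LOGDIR="/var/log/ocr"|' mineru_vllm_daemon.sh
+sed -i 's|^CUDA_VISIBLE_DEVICES=.*|CUDA_VISIBLE_DEVICES="0"|' mineru_vllm_daemon.sh
+```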
+
+### 3. Permissions
+
+```bash
+# Make all scripts executable
+chmod +x *.sh
+```
+
+### 4. Managing with systemd (recommended)
+
+You can create systemd service units for these daemons to get automatic startup and restarts (the PID file path below matches what `mineru_vllm_daemon.sh` writes):
+
+```ini
+[Unit]
+Description=MinerU vLLM Service
+After=network.target
+
+[Service]
+Type=forking
+PIDFile=/home/ubuntu/zhch/logs/mineru_vllm.pid
+User=ubuntu
+WorkingDirectory=/path/to/ocr_platform/ocr_tools/daemons
+ExecStart=/path/to/ocr_platform/ocr_tools/daemons/mineru_vllm_daemon.sh start
+ExecStop=/path/to/ocr_platform/ocr_tools/daemons/mineru_vllm_daemon.sh stop
+Restart=always
+
+[Install]
+WantedBy=multi-user.target
+```
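+
+To install the unit (a sketch, assuming you save it as `mineru-vllm.service`; the unit name is illustrative):
+
+```bash
+sudo cp mineru-vllm.service /etc/systemd/system/
+sudo systemctl daemon-reload
+sudo systemctl enable --now mineru-vllm.service   # start now and at boot
+sudo systemctl status mineru-vllm.service
+```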
+
+### 5. Log Management
+
+All services write their log files under `/home/ubuntu/zhch/logs/`:
+
+- `mineru_vllm.log` - MinerU vLLM service log
+- `ppstructurev3.log` - PP-StructureV3 service log
+- `paddleocr_vl_vllm.log` - PaddleOCR-VL vLLM service log
+- `vllm.log` - DotsOCR vLLM service log
+
+Clean up or rotate the log files regularly (see the logrotate sketch below).
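+
+A minimal logrotate sketch, assuming the default log directory (`copytruncate` rotates without restarting services that keep the log file open):
+
+```bash
+# Install a weekly rotation policy for the OCR daemon logs (illustrative policy)
+sudo tee /etc/logrotate.d/ocr-daemons >/dev/null <<'EOF'
+/home/ubuntu/zhch/logs/*.log {
+    weekly
+    rotate 4
+    compress
+    missingok
+    notifempty
+    copytruncate
+}
+EOF
+```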
+
+## Troubleshooting
+
+### Problem: the service fails to start
+
+**Possible causes**:
+1. conda environment not activated correctly
+2. Dependencies not installed
+3. Model files missing
+4. Port already in use
+5. GPU unavailable or misconfigured
+
+**Fixes**:
+1. Check the configuration with `./script_name.sh config`
+2. Check that the conda environment is active: `conda env list`
+3. Check that the dependencies are installed: `which python`, `which mineru-vllm-server`, etc.
+4. Check that the model path exists
+5. Check for port conflicts: `netstat -tuln | grep :PORT`
+6. Check GPU status: `nvidia-smi`
+
+### Problem: the API does not respond
+
+**Possible causes**:
+1. The service did not start properly
+2. The service is still starting up (wait a bit)
+3. Network connectivity problems
+4. A firewall is blocking the port
+
+**Fixes**:
+1. Check the service status with `./script_name.sh status`
+2. Inspect the logs: `./script_name.sh logs`
+3. Wait for the service to finish starting (this usually takes a few minutes; see the wait loop below)
+4. Check the firewall settings
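+
+A minimal wait loop for the vLLM-based services, assuming port 8121 (`/v1/models` is the same endpoint the daemons' `test` commands probe; port and timeout are illustrative):
+
+```bash
+# Poll the API until it answers or we give up (~10 minutes)
+PORT=8121
+for i in $(seq 1 60); do
+    if curl -s --connect-timeout 2 "http://127.0.0.1:$PORT/v1/models" >/dev/null; then
+        echo "API is up (waited ~$((i * 10))s)"
+        break
+    fi
+    sleep 10
+done
+```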
+
+### Problem: GPU out of memory
+
+**Possible causes**:
+1. GPU memory utilization set too high
+2. Multiple services sharing the same GPU
+3. Insufficient GPU memory
+
+**Fixes**:
+1. Lower the `GPU_MEMORY_UTILIZATION` parameter
+2. Run different services on different GPUs (adjust `CUDA_VISIBLE_DEVICES`)
+3. Shut down other programs that are using the GPU
+
+### Problem: model registration fails (DotsOCR)
+
+**Possible causes**:
+1. vLLM not installed correctly
+2. Incorrect DotsOCR model path
+3. PYTHONPATH set incorrectly
+
+**Fixes**:
+1. Check the vLLM installation: `which vllm` (see the check below)
+2. Check the model path: `ls -la $HF_MODEL_PATH`
+3. Check PYTHONPATH: `echo $PYTHONPATH`
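+
+For reference, `register_model` in `dotsocr_vllm_daemon.sh` injects the line `from DotsOCR import modeling_dots_ocr_vllm` into the `vllm` entry script. A quick check that the registration actually took effect:
+
+```bash
+# Prints the injected import line (with its line number) if registration succeeded
+grep -n "from DotsOCR import modeling_dots_ocr_vllm" "$(which vllm)"
+```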
+
+## Related Documentation
+
+- [MinerU vL Tool README](../mineru_vl_tool/README.md)
+- [PP-StructureV3 Tool README](../ppstructure_tool/README.md)
+- [PaddleOCR-VL Tool README](../paddle_vl_tool/README.md)
+- [DotsOCR vL Tool README](../dots.ocr_vl_tool/README.md)
+
+## Notes
+
+1. **Path configuration**: paths in the scripts (model paths, log paths) must be adjusted to the actual deployment environment
+2. **Port conflicts**: make sure each service uses its own port
+3. **GPU resources**: allocate GPUs sensibly so services do not compete for the same device
+4. **Log management**: clean up log files regularly so the disk does not fill up
+5. **Service monitoring**: manage the services with a supervisor (systemd, supervisor) to keep them running reliably
+

+ 353 - 0
ocr_tools/daemons/dotsocr_vllm_daemon.sh

@@ -0,0 +1,353 @@
+#!/bin/bash
+# filepath: ocr_platform/ocr_tools/daemons/dotsocr_vllm_daemon.sh
+# Corresponding client tool: ocr_tools/dots.ocr_vl_tool/main.py
+
+# DotsOCR vLLM service daemon script
+
+LOGDIR="/home/ubuntu/zhch/logs"
+mkdir -p $LOGDIR
+PIDFILE="$LOGDIR/vllm.pid"
+LOGFILE="$LOGDIR/vllm.log"
+
+# Configuration parameters
+CONDA_ENV="dots.ocr"
+PORT="8101"
+HOST="0.0.0.0"
+HF_MODEL_PATH="/home/ubuntu/zhch/dots.ocr/weights/DotsOCR"
+MODEL_NAME="DotsOCR"
+
+# GPU configuration
+GPU_MEMORY_UTILIZATION="0.70"
+CUDA_VISIBLE_DEVICES="1,2"
+DATA_PARALLEL_SIZE="2"  # two GPUs
+MAX_MODEL_LEN="32768"
+MAX_NUM_BATCHED_TOKENS="32768"
+MAX_NUM_SEQS="16"
+
+# Initialize and activate the conda environment
+if [ -f "/home/ubuntu/anaconda3/etc/profile.d/conda.sh" ]; then
+    source /home/ubuntu/anaconda3/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+elif [ -f "/opt/conda/etc/profile.d/conda.sh" ]; then
+    source /opt/conda/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+else
+    # Method 2: put the conda env's bin directory on PATH directly
+    echo "Warning: Using direct conda path activation"
+    export PATH="/home/ubuntu/anaconda3/envs/$CONDA_ENV/bin:$PATH"
+fi
+
+# Set environment variables
+export PYTHONPATH=$(dirname "$HF_MODEL_PATH"):$PYTHONPATH
+export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
+export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
+
+# Register the DotsOCR model with vLLM
+register_model() {
+    echo "🔧 Registering the DotsOCR model with vLLM..."
+    vllm_path=$(which vllm)
+    if [ -z "$vllm_path" ]; then
+        echo "❌ vLLM 未找到,请检查安装和环境激活"
+        return 1
+    fi
+    
+    if ! grep -q "from DotsOCR import modeling_dots_ocr_vllm" "$vllm_path"; then
+        sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
+from DotsOCR import modeling_dots_ocr_vllm' "$vllm_path"
+        echo "✅ DotsOCR 模型已注册到 vLLM"
+    else
+        echo "✅ DotsOCR 模型已经注册过了"
+    fi
+}
+
+start() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "vLLM DotsOCR is already running"
+        return 1
+    fi
+    
+    echo "Starting vLLM DotsOCR daemon..."
+    echo "Host: $HOST, Port: $PORT"
+    echo "Model path: $HF_MODEL_PATH"
+    echo "GPU memory utilization: $GPU_MEMORY_UTILIZATION"
+    echo "Data parallel size: $DATA_PARALLEL_SIZE"
+    
+    # Check that the model files exist
+    if [ ! -d "$HF_MODEL_PATH" ]; then
+        echo "❌ Model path not found: $HF_MODEL_PATH"
+        return 1
+    fi
+    
+    # Check the conda environment
+    if ! command -v python >/dev/null 2>&1; then
+        echo "❌ Python not found. Check conda environment activation."
+        return 1
+    fi
+    
+    # Check for the vllm command
+    if ! command -v vllm >/dev/null 2>&1; then
+        echo "❌ vLLM not found. Check installation and environment."
+        return 1
+    fi
+    
+    echo "🔧 Using Python: $(which python)"
+    echo "🔧 Using vLLM: $(which vllm)"
+    
+    # Register the model
+    register_model
+    if [ $? -ne 0 ]; then
+        echo "❌ Model registration failed"
+        return 1
+    fi
+    
+    # Show GPU status
+    echo "📊 GPU status check:"
+    if command -v nvidia-smi >/dev/null 2>&1; then
+        nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv,noheader,nounits | \
+        awk -F',' '{printf "  GPU %s: %s - memory: %sMB/%sMB\n", $1, $2, $3, $4}'
+    else
+        echo "⚠️  nvidia-smi not available"
+    fi
+    
+    # Start the vLLM service
+    CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES nohup vllm serve $HF_MODEL_PATH \
+        --host $HOST \
+        --port $PORT \
+        --gpu-memory-utilization $GPU_MEMORY_UTILIZATION \
+        --max-log-len 1000 \
+        --trust-remote-code \
+        --max-model-len $MAX_MODEL_LEN \
+        --max-num-batched-tokens $MAX_NUM_BATCHED_TOKENS \
+        --uvicorn-log-level info \
+        --limit-mm-per-prompt '{"image": 1}' \
+        --chat-template-content-format string \
+        --data-parallel-size $DATA_PARALLEL_SIZE \
+        --max-num-seqs $MAX_NUM_SEQS \
+        --enable-prefix-caching \
+        --served-model-name $MODEL_NAME \
+        > $LOGFILE 2>&1 &
+    
+    echo $! > $PIDFILE
+    echo "✅ vLLM DotsOCR started with PID: $(cat $PIDFILE)"
+    echo "📋 Log file: $LOGFILE"
+    echo "🌐 Service URL: http://$HOST:$PORT"
+    echo "📖 API Documentation: http://localhost:$PORT/docs"
+}
+
+stop() {
+    if [ ! -f $PIDFILE ]; then
+        echo "vLLM DotsOCR is not running"
+        return 1
+    fi
+    
+    PID=$(cat $PIDFILE)
+    echo "Stopping vLLM DotsOCR (PID: $PID)..."
+    
+    # Graceful stop
+    kill $PID
+    
+    # Wait for the process to exit
+    for i in {1..10}; do
+        if ! kill -0 $PID 2>/dev/null; then
+            break
+        fi
+        echo "Waiting for process to stop... ($i/10)"
+        sleep 1
+    done
+    
+    # Force kill if the process is still running
+    if kill -0 $PID 2>/dev/null; then
+        echo "Force killing process..."
+        kill -9 $PID
+    fi
+    
+    rm -f $PIDFILE
+    echo "✅ vLLM DotsOCR stopped"
+}
+
+status() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        PID=$(cat $PIDFILE)
+        echo "✅ vLLM DotsOCR is running (PID: $PID)"
+        echo "🌐 Service URL: http://$HOST:$PORT"
+        echo "📋 Log file: $LOGFILE"
+        
+        # Check whether the port is being listened on
+        if command -v ss >/dev/null 2>&1; then
+            if ss -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        elif command -v netstat >/dev/null 2>&1; then
+            if netstat -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        fi
+        
+        # Check the API response
+        if command -v curl >/dev/null 2>&1; then
+            if curl -s --connect-timeout 2 http://127.0.0.1:$PORT/v1/models > /dev/null 2>&1; then
+                echo "🎯 API is responding"
+            else
+                echo "⚠️  API not responding (service may be starting up)"
+            fi
+        fi
+        
+        # Show GPU usage
+        if command -v nvidia-smi >/dev/null 2>&1; then
+            echo "📊 GPU usage:"
+            nvidia-smi --query-gpu=index,utilization.gpu,utilization.memory,memory.used,memory.total --format=csv,noheader,nounits | \
+            awk -F',' '{printf "  GPU %s: GPU util %s%%, memory util %s%%, VRAM %sMB/%sMB\n", $1, $2, $3, $4, $5}'
+        fi
+        
+        # Show the latest log lines
+        if [ -f $LOGFILE ]; then
+            echo "📄 Latest logs (last 3 lines):"
+            tail -3 $LOGFILE | sed 's/^/  /'
+        fi
+    else
+        echo "❌ vLLM DotsOCR is not running"
+        if [ -f $PIDFILE ]; then
+            echo "Removing stale PID file..."
+            rm -f $PIDFILE
+        fi
+    fi
+}
+
+logs() {
+    if [ -f $LOGFILE ]; then
+        echo "📄 vLLM DotsOCR logs:"
+        echo "=================="
+        tail -f $LOGFILE
+    else
+        echo "❌ Log file not found: $LOGFILE"
+    fi
+}
+
+config() {
+    echo "📋 Current configuration:"
+    echo "  Conda Environment: $CONDA_ENV"
+    echo "  Host: $HOST"
+    echo "  Port: $PORT"
+    echo "  Model Path: $HF_MODEL_PATH"
+    echo "  Model Name: $MODEL_NAME"
+    echo "  GPU Memory Utilization: $GPU_MEMORY_UTILIZATION"
+    echo "  Data Parallel Size: $DATA_PARALLEL_SIZE"
+    echo "  Max Model Length: $MAX_MODEL_LEN"
+    echo "  Max Num Seqs: $MAX_NUM_SEQS"
+    echo "  PID File: $PIDFILE"
+    echo "  Log File: $LOGFILE"
+    
+    if [ -d "$HF_MODEL_PATH" ]; then
+        echo "✅ Model path exists"
+        echo "  Model files:"
+        ls -la "$HF_MODEL_PATH" | head -10 | sed 's/^/    /'
+        if [ $(ls -1 "$HF_MODEL_PATH" | wc -l) -gt 10 ]; then
+            echo "    ... and more files"
+        fi
+    else
+        echo "❌ Model path not found"
+    fi
+    
+    # Show environment info
+    echo ""
+    echo "🔧 Environment:"
+    echo "  Python: $(which python 2>/dev/null || echo 'Not found')"
+    echo "  vLLM: $(which vllm 2>/dev/null || echo 'Not found')"
+    echo "  Conda: $(which conda 2>/dev/null || echo 'Not found')"
+    echo "  CUDA: $(which nvcc 2>/dev/null || echo 'Not found')"
+    
+    # Show GPU info
+    if command -v nvidia-smi >/dev/null 2>&1; then
+        echo ""
+        echo "🔥 GPU Information:"
+        nvidia-smi --query-gpu=index,name,driver_version,memory.total --format=csv,noheader,nounits | \
+        awk -F',' '{printf "  GPU %s: %s (Driver: %s, Memory: %sMB)\n", $1, $2, $3, $4}'
+    fi
+}
+
+test_api() {
+    echo "🧪 Testing vLLM DotsOCR API..."
+    
+    if [ ! -f $PIDFILE ] || ! kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "❌ vLLM service is not running"
+        return 1
+    fi
+    
+    if ! command -v curl >/dev/null 2>&1; then
+        echo "❌ curl command not found"
+        return 1
+    fi
+    
+    echo "📡 Testing /v1/models endpoint..."
+    response=$(curl -s --connect-timeout 5 http://127.0.0.1:$PORT/v1/models)
+    if [ $? -eq 0 ]; then
+        echo "✅ Models endpoint accessible"
+        echo "$response" | python -m json.tool 2>/dev/null || echo "$response"
+    else
+        echo "❌ Models endpoint not accessible"
+    fi
+}
+
+# Show usage help
+usage() {
+    echo "vLLM DotsOCR Service Daemon"
+    echo "============================"
+    echo "Usage: $0 {start|stop|restart|status|logs|config|test}"
+    echo ""
+    echo "Commands:"
+    echo "  start   - Start the vLLM DotsOCR service"
+    echo "  stop    - Stop the vLLM DotsOCR service"
+    echo "  restart - Restart the vLLM DotsOCR service"
+    echo "  status  - Show service status and resource usage"
+    echo "  logs    - Show service logs (follow mode)"
+    echo "  config  - Show current configuration"
+    echo "  test    - Test API endpoints"
+    echo ""
+    echo "Configuration (edit script to modify):"
+    echo "  Host: $HOST"
+    echo "  Port: $PORT"
+    echo "  Model: $HF_MODEL_PATH"
+    echo "  GPU Memory: $GPU_MEMORY_UTILIZATION"
+    echo "  Parallel Size: $DATA_PARALLEL_SIZE"
+    echo ""
+    echo "Examples:"
+    echo "  ./dotsocr_vllm_daemon.sh start"
+    echo "  ./dotsocr_vllm_daemon.sh status"
+    echo "  ./dotsocr_vllm_daemon.sh logs"
+    echo "  ./dotsocr_vllm_daemon.sh test"
+}
+
+case "$1" in
+    start)
+        start
+        ;;
+    stop)
+        stop
+        ;;
+    restart)
+        stop
+        sleep 3
+        start
+        ;;
+    status)
+        status
+        ;;
+    logs)
+        logs
+        ;;
+    config)
+        config
+        ;;
+    test)
+        test_api
+        ;;
+    *)
+        usage
+        exit 1
+        ;;
+esac
+

+ 393 - 0
ocr_tools/daemons/mineru_vllm_daemon.sh

@@ -0,0 +1,393 @@
+#!/bin/bash
+# filepath: ocr_platform/ocr_tools/daemons/mineru_vllm_daemon.sh
+# Corresponding client tool: ocr_tools/mineru_vl_tool/main.py
+
+# MinerU vLLM service daemon script
+
+LOGDIR="/home/ubuntu/zhch/logs"
+mkdir -p $LOGDIR
+PIDFILE="$LOGDIR/mineru_vllm.pid"
+LOGFILE="$LOGDIR/mineru_vllm.log"
+
+# Configuration parameters
+CONDA_ENV="mineru2"
+PORT="8121"
+HOST="0.0.0.0"
+MODEL_PATH="/home/ubuntu/models/modelscope_cache/models/OpenDataLab/MinerU2___5-2509-1___2B"
+MODEL_NAME="MinerU2.5"
+
+# GPU configuration
+GPU_MEMORY_UTILIZATION="0.3"
+CUDA_VISIBLE_DEVICES="4"
+MAX_MODEL_LEN="16384"
+MAX_NUM_BATCHED_TOKENS="8192"
+MAX_NUM_SEQS="8"
+
+# MinerU configuration
+export MINERU_TOOLS_CONFIG_JSON="/home/ubuntu/zhch/MinerU/mineru.json"
+export MODELSCOPE_CACHE="/home/ubuntu/models/modelscope_cache"
+export USE_MODELSCOPE_HUB=1
+# export CUDA_VISIBLE_DEVICES="$CUDA_VISIBLE_DEVICES"
+# export NLTK_DATA="/home/ubuntu/nltk_data"
+# export HF_HOME="/home/ubuntu/models/hf_home"
+# export HF_ENDPOINT="https://hf-mirror.com"
+# export TORCH_HOME="/home/ubuntu/models/torch/"
+
+# Initialize and activate the conda environment
+if [ -f "/home/ubuntu/anaconda3/etc/profile.d/conda.sh" ]; then
+    source /home/ubuntu/anaconda3/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+elif [ -f "/opt/conda/etc/profile.d/conda.sh" ]; then
+    source /opt/conda/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+else
+    # Method 2: put the conda env's bin directory on PATH directly
+    echo "Warning: Using direct conda path activation"
+    export PATH="/home/ubuntu/anaconda3/envs/$CONDA_ENV/bin:$PATH"
+fi
+
+# Set environment variables
+# export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
+# export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
+
+start() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "MinerU vLLM is already running"
+        return 1
+    fi
+    
+    echo "Starting MinerU vLLM daemon..."
+    echo "Host: $HOST, Port: $PORT"
+    echo "Model path: $MODEL_PATH"
+    echo "GPU memory utilization: $GPU_MEMORY_UTILIZATION"
+    echo "CUDA devices: $CUDA_VISIBLE_DEVICES"
+    
+    # Check that the model files exist
+    if [ ! -d "$MODEL_PATH" ]; then
+        echo "❌ Model path not found: $MODEL_PATH"
+        echo "Please download the model first:"
+        echo "python -m mineru.cli.models_download"
+        return 1
+    fi
+    
+    # Check the conda environment
+    if ! command -v python >/dev/null 2>&1; then
+        echo "❌ Python not found. Check conda environment activation."
+        return 1
+    fi
+    
+    # Check for the mineru-vllm-server command
+    if ! command -v mineru-vllm-server >/dev/null 2>&1; then
+        echo "❌ mineru-vllm-server not found. Check installation and environment."
+        return 1
+    fi
+    
+    echo "🔧 Using Python: $(which python)"
+    echo "🔧 Using mineru-vllm-server: $(which mineru-vllm-server)"
+    
+    # Show GPU status
+    echo "📊 GPU status check:"
+    if command -v nvidia-smi >/dev/null 2>&1; then
+        nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv,noheader,nounits | \
+        awk -F',' '{printf "  GPU %s: %s - memory: %sMB/%sMB\n", $1, $2, $3, $4}'
+    else
+        echo "⚠️  nvidia-smi not available"
+    fi
+    
+    # Start the MinerU vLLM service
+    CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES nohup mineru-vllm-server \
+            --host $HOST \
+            --port $PORT \
+            --gpu-memory-utilization $GPU_MEMORY_UTILIZATION \
+            --served-model-name $MODEL_NAME \
+    > $LOGFILE 2>&1 &
+    
+    echo $! > $PIDFILE
+    echo "✅ MinerU vLLM started with PID: $(cat $PIDFILE)"
+    echo "📋 Log file: $LOGFILE"
+    echo "🌐 Service URL: http://$HOST:$PORT"
+    echo "📖 API Documentation: http://localhost:$PORT/docs"
+    echo ""
+    echo "Waiting for service to start..."
+    sleep 5
+    status
+}
+
+stop() {
+    if [ ! -f $PIDFILE ]; then
+        echo "MinerU vLLM is not running"
+        return 1
+    fi
+    
+    PID=$(cat $PIDFILE)
+    echo "Stopping MinerU vLLM (PID: $PID)..."
+    
+    # Graceful stop
+    kill $PID
+    
+    # Wait for the process to exit
+    for i in {1..10}; do
+        if ! kill -0 $PID 2>/dev/null; then
+            break
+        fi
+        echo "Waiting for process to stop... ($i/10)"
+        sleep 1
+    done
+    
+    # Force kill if the process is still running
+    if kill -0 $PID 2>/dev/null; then
+        echo "Force killing process..."
+        kill -9 $PID
+    fi
+    
+    rm -f $PIDFILE
+    echo "✅ MinerU vLLM stopped"
+}
+
+status() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        PID=$(cat $PIDFILE)
+        echo "✅ MinerU vLLM is running (PID: $PID)"
+        echo "🌐 Service URL: http://$HOST:$PORT"
+        echo "📋 Log file: $LOGFILE"
+        
+        # Check whether the port is being listened on
+        if command -v ss >/dev/null 2>&1; then
+            if ss -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        elif command -v netstat >/dev/null 2>&1; then
+            if netstat -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        fi
+        
+        # Check the API response
+        if command -v curl >/dev/null 2>&1; then
+            if curl -s --connect-timeout 2 http://127.0.0.1:$PORT/v1/models > /dev/null 2>&1; then
+                echo "🎯 API is responding"
+            else
+                echo "⚠️  API not responding (service may be starting up)"
+            fi
+        fi
+        
+        # Show GPU usage
+        if command -v nvidia-smi >/dev/null 2>&1; then
+            echo "📊 GPU usage:"
+            nvidia-smi --query-gpu=index,utilization.gpu,utilization.memory,memory.used,memory.total --format=csv,noheader,nounits | \
+            awk -F',' '{printf "  GPU %s: GPU util %s%%, memory util %s%%, VRAM %sMB/%sMB\n", $1, $2, $3, $4, $5}'
+        fi
+        
+        # Show the latest log lines
+        if [ -f $LOGFILE ]; then
+            echo "📄 Latest logs (last 3 lines):"
+            tail -3 $LOGFILE | sed 's/^/  /'
+        fi
+    else
+        echo "❌ MinerU vLLM is not running"
+        if [ -f $PIDFILE ]; then
+            echo "Removing stale PID file..."
+            rm -f $PIDFILE
+        fi
+    fi
+}
+
+logs() {
+    if [ -f $LOGFILE ]; then
+        echo "📄 MinerU vLLM logs:"
+        echo "=================="
+        tail -f $LOGFILE
+    else
+        echo "❌ Log file not found: $LOGFILE"
+    fi
+}
+
+config() {
+    echo "📋 Current configuration:"
+    echo "  Conda Environment: $CONDA_ENV"
+    echo "  Host: $HOST"
+    echo "  Port: $PORT"
+    echo "  Model Path: $MODEL_PATH"
+    echo "  Model Name: $MODEL_NAME"
+    echo "  GPU Memory Utilization: $GPU_MEMORY_UTILIZATION"
+    echo "  CUDA Visible Devices: $CUDA_VISIBLE_DEVICES"
+    echo "  Max Model Length: $MAX_MODEL_LEN"
+    echo "  Max Num Seqs: $MAX_NUM_SEQS"
+    echo "  PID File: $PIDFILE"
+    echo "  Log File: $LOGFILE"
+    echo ""
+    echo "  MinerU Config: $MINERU_TOOLS_CONFIG_JSON"
+    echo "  ModelScope Cache: $MODELSCOPE_CACHE"
+    
+    if [ -d "$MODEL_PATH" ]; then
+        echo "✅ Model path exists"
+        echo "  Model files:"
+        ls -la "$MODEL_PATH" | head -10 | sed 's/^/    /'
+        if [ $(ls -1 "$MODEL_PATH" | wc -l) -gt 10 ]; then
+            echo "    ... and more files"
+        fi
+    else
+        echo "❌ Model path not found"
+    fi
+    
+    # Check the MinerU config file
+    if [ -f "$MINERU_TOOLS_CONFIG_JSON" ]; then
+        echo "✅ MinerU config file exists"
+    else
+        echo "❌ MinerU config file not found: $MINERU_TOOLS_CONFIG_JSON"
+    fi
+    
+    # Show environment info
+    echo ""
+    echo "🔧 Environment:"
+    echo "  Python: $(which python 2>/dev/null || echo 'Not found')"
+    echo "  mineru-vllm-server: $(which mineru-vllm-server 2>/dev/null || echo 'Not found')"
+    echo "  Conda: $(which conda 2>/dev/null || echo 'Not found')"
+    echo "  CUDA: $(which nvcc 2>/dev/null || echo 'Not found')"
+    
+    # Show GPU info
+    if command -v nvidia-smi >/dev/null 2>&1; then
+        echo ""
+        echo "🔥 GPU Information:"
+        nvidia-smi --query-gpu=index,name,driver_version,memory.total --format=csv,noheader,nounits | \
+        awk -F',' '{printf "  GPU %s: %s (Driver: %s, Memory: %sMB)\n", $1, $2, $3, $4}'
+    fi
+}
+
+test_api() {
+    echo "🧪 Testing MinerU vLLM API..."
+    
+    if [ ! -f $PIDFILE ] || ! kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "❌ MinerU vLLM service is not running"
+        return 1
+    fi
+    
+    if ! command -v curl >/dev/null 2>&1; then
+        echo "❌ curl command not found"
+        return 1
+    fi
+    
+    echo "📡 Testing /v1/models endpoint..."
+    response=$(curl -s --connect-timeout 5 http://127.0.0.1:$PORT/v1/models)
+    if [ $? -eq 0 ]; then
+        echo "✅ Models endpoint accessible"
+        echo "$response" | python -m json.tool 2>/dev/null || echo "$response"
+    else
+        echo "❌ Models endpoint not accessible"
+    fi
+    
+    echo ""
+    echo "📡 Testing health endpoint..."
+    health_response=$(curl -s --connect-timeout 5 http://127.0.0.1:$PORT/health)
+    if [ $? -eq 0 ]; then
+        echo "✅ Health endpoint accessible"
+        echo "$health_response"
+    else
+        echo "❌ Health endpoint not accessible"
+    fi
+}
+
+test_client() {
+    echo "🧪 Testing MinerU client with vLLM server..."
+    
+    if [ ! -f $PIDFILE ] || ! kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "❌ MinerU vLLM service is not running. Start it first with: $0 start"
+        return 1
+    fi
+    
+    # Test file paths (adjust to your environment)
+    TEST_IMAGE="/home/ubuntu/zhch/data/test/sample.png"
+    TEST_OUTPUT="/tmp/mineru_vllm_test_output"
+    
+    if [ ! -f "$TEST_IMAGE" ]; then
+        echo "⚠️  Test image not found: $TEST_IMAGE"
+        echo "Please provide a test image or update the TEST_IMAGE path in the script"
+        return 1
+    fi
+    
+    echo "📄 Testing with image: $TEST_IMAGE"
+    echo "📁 Output directory: $TEST_OUTPUT"
+    
+    # Connect to the vLLM server via the HTTP client backend
+    python -m mineru.cli.client \
+        -p "$TEST_IMAGE" \
+        -o "$TEST_OUTPUT" \
+        --backend vlm-http-client \
+        --server-url "http://127.0.0.1:$PORT"
+    
+    if [ $? -eq 0 ]; then
+        echo "✅ Client test completed successfully"
+        echo "📁 Check output in: $TEST_OUTPUT"
+    else
+        echo "❌ Client test failed"
+    fi
+}
+
+# Show usage help
+usage() {
+    echo "MinerU vLLM Service Daemon"
+    echo "=========================="
+    echo "Usage: $0 {start|stop|restart|status|logs|config|test|test-client}"
+    echo ""
+    echo "Commands:"
+    echo "  start       - Start the MinerU vLLM service"
+    echo "  stop        - Stop the MinerU vLLM service"
+    echo "  restart     - Restart the MinerU vLLM service"
+    echo "  status      - Show service status and resource usage"
+    echo "  logs        - Show service logs (follow mode)"
+    echo "  config      - Show current configuration"
+    echo "  test        - Test API endpoints"
+    echo "  test-client - Test MinerU client with vLLM server"
+    echo ""
+    echo "Configuration (edit script to modify):"
+    echo "  Host: $HOST"
+    echo "  Port: $PORT"
+    echo "  Model: $MODEL_PATH"
+    echo "  GPU Memory: $GPU_MEMORY_UTILIZATION"
+    echo "  CUDA Devices: $CUDA_VISIBLE_DEVICES"
+    echo ""
+    echo "Examples:"
+    echo "  ./mineru_vllm_daemon.sh start"
+    echo "  ./mineru_vllm_daemon.sh status"
+    echo "  ./mineru_vllm_daemon.sh logs"
+    echo "  ./mineru_vllm_daemon.sh test"
+    echo "  ./mineru_vllm_daemon.sh test-client"
+}
+
+case "$1" in
+    start)
+        start
+        ;;
+    stop)
+        stop
+        ;;
+    restart)
+        stop
+        sleep 3
+        start
+        ;;
+    status)
+        status
+        ;;
+    logs)
+        logs
+        ;;
+    config)
+        config
+        ;;
+    test)
+        test_api
+        ;;
+    test-client)
+        test_client
+        ;;
+    *)
+        usage
+        exit 1
+        ;;
+esac
+

+ 384 - 0
ocr_tools/daemons/paddle_vllm_daemon.sh

@@ -0,0 +1,384 @@
+#!/bin/bash
+# filepath: ocr_platform/ocr_tools/daemons/paddle_vllm_daemon.sh
+# Corresponding client tool: ocr_tools/paddle_vl_tool/main.py
+
+# PaddleOCR-VL vLLM service daemon script
+
+LOGDIR="/home/ubuntu/zhch/logs"
+mkdir -p $LOGDIR
+PIDFILE="$LOGDIR/paddleocr_vl_vllm.pid"
+LOGFILE="$LOGDIR/paddleocr_vl_vllm.log"
+
+# Configuration parameters
+CONDA_ENV="paddle"  # adjust to your environment
+PORT="8110"
+HOST="0.0.0.0"
+MODEL_NAME="PaddleOCR-VL-0.9B"
+BACKEND="vllm"
+
+# GPU configuration
+GPU_MEMORY_UTILIZATION="0.3"
+CUDA_VISIBLE_DEVICES="3"  # use GPU 3
+MAX_MODEL_LEN="16384"
+MAX_NUM_BATCHED_TOKENS="8192"
+MAX_NUM_SEQS="8"
+
+# PaddleX environment variables
+export PADDLE_PDX_MODEL_SOURCE="bos"
+export PYTHONWARNINGS="ignore::UserWarning"
+
+# Initialize and activate the conda environment
+if [ -f "/home/ubuntu/anaconda3/etc/profile.d/conda.sh" ]; then
+    source /home/ubuntu/anaconda3/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+elif [ -f "/opt/conda/etc/profile.d/conda.sh" ]; then
+    source /opt/conda/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+else
+    echo "Warning: Using direct conda path activation"
+    export PATH="/home/ubuntu/anaconda3/envs/$CONDA_ENV/bin:$PATH"
+fi
+
+start() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "PaddleOCR-VL vLLM is already running"
+        return 1
+    fi
+    
+    echo "Starting PaddleOCR-VL vLLM daemon..."
+    echo "Host: $HOST, Port: $PORT"
+    echo "Model: $MODEL_NAME, Backend: $BACKEND"
+    echo "GPU memory utilization: $GPU_MEMORY_UTILIZATION"
+    echo "CUDA devices: $CUDA_VISIBLE_DEVICES"
+    
+    # Check the conda environment
+    if ! command -v python >/dev/null 2>&1; then
+        echo "❌ Python not found. Check conda environment activation."
+        return 1
+    fi
+    
+    # Check for the paddlex_genai_server command
+    if ! command -v paddlex_genai_server >/dev/null 2>&1; then
+        echo "❌ paddlex_genai_server not found. Please install vllm-server plugin:"
+        echo "   paddlex --install genai-vllm-server"
+        return 1
+    fi
+    
+    echo "🔧 Using Python: $(which python)"
+    echo "🔧 Using paddlex_genai_server: $(which paddlex_genai_server)"
+    
+    # Show GPU status
+    echo "📊 GPU status check:"
+    if command -v nvidia-smi >/dev/null 2>&1; then
+        nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv,noheader,nounits | \
+        grep "^$CUDA_VISIBLE_DEVICES," | \
+        awk -F',' '{printf "  GPU %s: %s - memory: %sMB/%sMB\n", $1, $2, $3, $4}'
+    else
+        echo "⚠️  nvidia-smi not available"
+    fi
+    
+    # Start the PaddleOCR-VL vLLM service
+    CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES nohup paddlex_genai_server \
+        --model_name $MODEL_NAME \
+        --backend $BACKEND \
+        --host $HOST \
+        --port $PORT \
+        --backend_config <(cat <<EOF
+gpu-memory-utilization: $GPU_MEMORY_UTILIZATION
+EOF
+) > $LOGFILE 2>&1 &
+    
+    echo $! > $PIDFILE
+    echo "✅ PaddleOCR-VL vLLM started with PID: $(cat $PIDFILE)"
+    echo "📋 Log file: $LOGFILE"
+    echo "🌐 Service URL: http://$HOST:$PORT"
+    echo "📖 API Documentation: http://localhost:$PORT/docs"
+    echo ""
+    echo "Waiting for service to start..."
+    sleep 5
+    status
+}
+
+stop() {
+    if [ ! -f $PIDFILE ]; then
+        echo "PaddleOCR-VL vLLM is not running"
+        return 1
+    fi
+    
+    PID=$(cat $PIDFILE)
+    echo "Stopping PaddleOCR-VL vLLM (PID: $PID)..."
+    
+    # Graceful stop
+    kill $PID
+    
+    # Wait for the process to exit
+    for i in {1..10}; do
+        if ! kill -0 $PID 2>/dev/null; then
+            break
+        fi
+        echo "Waiting for process to stop... ($i/10)"
+        sleep 1
+    done
+    
+    # Force kill if the process is still running
+    if kill -0 $PID 2>/dev/null; then
+        echo "Force killing process..."
+        kill -9 $PID
+    fi
+    
+    rm -f $PIDFILE
+    echo "✅ PaddleOCR-VL vLLM stopped"
+}
+
+status() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        PID=$(cat $PIDFILE)
+        echo "✅ PaddleOCR-VL vLLM is running (PID: $PID)"
+        echo "🌐 Service URL: http://$HOST:$PORT"
+        echo "📋 Log file: $LOGFILE"
+        
+        # Check whether the port is being listened on
+        if command -v ss >/dev/null 2>&1; then
+            if ss -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        elif command -v netstat >/dev/null 2>&1; then
+            if netstat -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        fi
+        
+        # Check the API response
+        if command -v curl >/dev/null 2>&1; then
+            if curl -s --connect-timeout 2 http://127.0.0.1:$PORT/v1/models > /dev/null 2>&1; then
+                echo "🎯 API is responding"
+            else
+                echo "⚠️  API not responding (service may be starting up)"
+            fi
+        fi
+        
+        # Show GPU usage
+        if command -v nvidia-smi >/dev/null 2>&1; then
+            echo "📊 GPU usage:"
+            nvidia-smi --query-gpu=index,utilization.gpu,utilization.memory,memory.used,memory.total --format=csv,noheader,nounits | \
+            grep "^$CUDA_VISIBLE_DEVICES," | \
+            awk -F',' '{printf "  GPU %s: GPU util %s%%, memory util %s%%, VRAM %sMB/%sMB\n", $1, $2, $3, $4, $5}'
+        fi
+        
+        # Show the latest log lines
+        if [ -f $LOGFILE ]; then
+            echo "📄 Latest logs (last 3 lines):"
+            tail -3 $LOGFILE | sed 's/^/  /'
+        fi
+    else
+        echo "❌ PaddleOCR-VL vLLM is not running"
+        if [ -f $PIDFILE ]; then
+            echo "Removing stale PID file..."
+            rm -f $PIDFILE
+        fi
+    fi
+}
+
+logs() {
+    if [ -f $LOGFILE ]; then
+        echo "📄 PaddleOCR-VL vLLM logs:"
+        echo "=================="
+        tail -f $LOGFILE
+    else
+        echo "❌ Log file not found: $LOGFILE"
+    fi
+}
+
+config() {
+    echo "📋 Current configuration:"
+    echo "  Conda Environment: $CONDA_ENV"
+    echo "  Host: $HOST"
+    echo "  Port: $PORT"
+    echo "  Model Name: $MODEL_NAME"
+    echo "  Backend: $BACKEND"
+    echo "  GPU Memory Utilization: $GPU_MEMORY_UTILIZATION"
+    echo "  CUDA Visible Devices: $CUDA_VISIBLE_DEVICES"
+    echo "  Max Model Length: $MAX_MODEL_LEN"
+    echo "  Max Num Seqs: $MAX_NUM_SEQS"
+    echo "  PID File: $PIDFILE"
+    echo "  Log File: $LOGFILE"
+    echo ""
+    echo "  Model Source: ${PADDLE_PDX_MODEL_SOURCE:-default}"
+    
+    # Show environment info
+    echo ""
+    echo "🔧 Environment:"
+    echo "  Python: $(which python 2>/dev/null || echo 'Not found')"
+    echo "  paddlex_genai_server: $(which paddlex_genai_server 2>/dev/null || echo 'Not found')"
+    echo "  Conda: $(which conda 2>/dev/null || echo 'Not found')"
+    echo "  CUDA: $(which nvcc 2>/dev/null || echo 'Not found')"
+    
+    # Show GPU info
+    if command -v nvidia-smi >/dev/null 2>&1; then
+        echo ""
+        echo "🔥 GPU Information:"
+        nvidia-smi --query-gpu=index,name,driver_version,memory.total --format=csv,noheader,nounits | \
+        grep "^$CUDA_VISIBLE_DEVICES," | \
+        awk -F',' '{printf "  GPU %s: %s (Driver: %s, Memory: %sMB)\n", $1, $2, $3, $4}'
+    fi
+}
+
+test_api() {
+    echo "🧪 Testing PaddleOCR-VL vLLM API..."
+    
+    if [ ! -f $PIDFILE ] || ! kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "❌ PaddleOCR-VL vLLM service is not running"
+        return 1
+    fi
+    
+    if ! command -v curl >/dev/null 2>&1; then
+        echo "❌ curl command not found"
+        return 1
+    fi
+    
+    echo "📡 Testing /v1/models endpoint..."
+    response=$(curl -s --connect-timeout 5 http://127.0.0.1:$PORT/v1/models)
+    if [ $? -eq 0 ]; then
+        echo "✅ Models endpoint accessible"
+        echo "$response" | python -m json.tool 2>/dev/null || echo "$response"
+    else
+        echo "❌ Models endpoint not accessible"
+    fi
+    
+    echo ""
+    echo "📡 Testing health endpoint..."
+    health_response=$(curl -s --connect-timeout 5 http://127.0.0.1:$PORT/health)
+    if [ $? -eq 0 ]; then
+        echo "✅ Health endpoint accessible"
+        echo "$health_response"
+    else
+        echo "❌ Health endpoint not accessible"
+    fi
+}
+
+test_client() {
+    echo "🧪 Testing PaddleOCR-VL client with vLLM server..."
+    
+    if [ ! -f $PIDFILE ] || ! kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "❌ PaddleOCR-VL vLLM service is not running. Start it first with: $0 start"
+        return 1
+    fi
+    
+    # Test file paths (adjust to your environment)
+    TEST_IMAGE="/home/ubuntu/zhch/data/至远彩色印刷工业有限公司/2023年度报告母公司.img/2023年度报告母公司_page_006.png"
+    TEST_OUTPUT="/tmp/paddleocr_vl_vllm_test_output"
+    PIPELINE_CONFIG="/home/ubuntu/zhch/PaddleX/zhch/my_config/PaddleOCR-VL-Client.yaml"
+    
+    if [ ! -f "$TEST_IMAGE" ]; then
+        echo "⚠️  Test image not found: $TEST_IMAGE"
+        echo "Please provide a test image or update the TEST_IMAGE path in the script"
+        return 1
+    fi
+    
+    if [ ! -f "$PIPELINE_CONFIG" ]; then
+        echo "⚠️  Pipeline config not found: $PIPELINE_CONFIG"
+        echo "Please update the PIPELINE_CONFIG path in the script"
+        return 1
+    fi
+    
+    echo "📄 Testing with image: $TEST_IMAGE"
+    echo "⚙️  Using pipeline config: $PIPELINE_CONFIG"
+    echo "📁 Output directory: $TEST_OUTPUT"
+    echo ""
+    
+    # Method 1: use the paddlex CLI (recommended)
+    echo "🔧 Using paddlex CLI..."
+    mkdir -p "$TEST_OUTPUT"
+    
+    paddlex --pipeline "$PIPELINE_CONFIG" \
+            --input "$TEST_IMAGE" \
+            --save_path "$TEST_OUTPUT" \
+            --use_doc_orientation_classify False \
+            --use_doc_unwarping False
+    
+    if [ $? -eq 0 ]; then
+        echo "✅ CLI test completed successfully"
+        echo "📁 Results saved to: $TEST_OUTPUT"
+        
+        # List the generated files
+        if [ -d "$TEST_OUTPUT" ]; then
+            echo ""
+            echo "📂 Generated files:"
+            ls -lh "$TEST_OUTPUT" | tail -n +2 | awk '{print "  " $9 " (" $5 ")"}'
+        fi
+    else
+        echo "❌ CLI test failed"
+        return 1
+    fi
+    
+}
+
+# Show usage help
+usage() {
+    echo "PaddleOCR-VL vLLM Service Daemon"
+    echo "================================="
+    echo "Usage: $0 {start|stop|restart|status|logs|config|test|test-client}"
+    echo ""
+    echo "Commands:"
+    echo "  start       - Start the PaddleOCR-VL vLLM service"
+    echo "  stop        - Stop the PaddleOCR-VL vLLM service"
+    echo "  restart     - Restart the PaddleOCR-VL vLLM service"
+    echo "  status      - Show service status and resource usage"
+    echo "  logs        - Show service logs (follow mode)"
+    echo "  config      - Show current configuration"
+    echo "  test        - Test API endpoints"
+    echo "  test-client - Test PaddleX client with vLLM server"
+    echo ""
+    echo "Configuration (edit script to modify):"
+    echo "  Host: $HOST"
+    echo "  Port: $PORT"
+    echo "  Model: $MODEL_NAME"
+    echo "  Backend: $BACKEND"
+    echo "  GPU Memory: $GPU_MEMORY_UTILIZATION"
+    echo "  CUDA Devices: $CUDA_VISIBLE_DEVICES"
+    echo ""
+    echo "Examples:"
+    echo "  ./paddle_vllm_daemon.sh start"
+    echo "  ./paddle_vllm_daemon.sh status"
+    echo "  ./paddle_vllm_daemon.sh logs"
+    echo "  ./paddle_vllm_daemon.sh test"
+    echo "  ./paddle_vllm_daemon.sh test-client"
+}
+
+case "$1" in
+    start)
+        start
+        ;;
+    stop)
+        stop
+        ;;
+    restart)
+        stop
+        sleep 3
+        start
+        ;;
+    status)
+        status
+        ;;
+    logs)
+        logs
+        ;;
+    config)
+        config
+        ;;
+    test)
+        test_api
+        ;;
+    test-client)
+        test_client
+        ;;
+    *)
+        usage
+        exit 1
+        ;;
+esac
+

+ 281 - 0
ocr_tools/daemons/ppstructure_v3_daemon.sh

@@ -0,0 +1,281 @@
+#!/bin/bash
+# filepath: ocr_platform/ocr_tools/daemons/ppstructure_v3_daemon.sh
+# Corresponding client tool: ocr_tools/ppstructure_tool/api_client.py
+
+# PaddleX PP-StructureV3 service daemon script
+
+LOGDIR="/home/ubuntu/zhch/logs"
+mkdir -p $LOGDIR
+PIDFILE="$LOGDIR/ppstructurev3.pid"
+LOGFILE="$LOGDIR/ppstructurev3.log"
+
+# Configuration parameters
+CONDA_ENV="paddle"  # environment name from your study-notes.md
+PORT="8111"
+CUDA_VISIBLE_DEVICES=7
+
+# Script and config paths
+# Option 1: the migrated path (recommended)
+PADDLE_COMMON_DIR="/path/to/ocr_platform/ocr_tools/paddle_common"
+# Option 2: the original path (if not yet migrated)
+# PADDLE_COMMON_DIR="/home/ubuntu/zhch/PaddleX/zhch"
+
+# Pipeline config file path
+# Any of the files under ocr_tools/paddle_common/config/ can be used (recommended)
+PIPELINE_CONFIG="$PADDLE_COMMON_DIR/config/PP-StructureV3.yaml"
+# Or the original config path (if not yet migrated):
+# PIPELINE_CONFIG="/home/ubuntu/zhch/PaddleX/zhch/my_config/PP-StructureV3.yaml"
+# PIPELINE_CONFIG="$PADDLE_COMMON_DIR/config/PP-StructureV3-RT-DETR-H_layout_17cls.yaml"
+
+# Path to the start_paddlex_with_adapter.py script
+START_SCRIPT="$PADDLE_COMMON_DIR/start_paddlex_with_adapter.py"
+
+# Initialize and activate the conda environment
+# Method 1: initialize via the conda.sh script
+if [ -f "/home/ubuntu/anaconda3/etc/profile.d/conda.sh" ]; then
+    source /home/ubuntu/anaconda3/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+elif [ -f "/opt/conda/etc/profile.d/conda.sh" ]; then
+    source /opt/conda/etc/profile.d/conda.sh
+    conda activate $CONDA_ENV
+else
+    # Method 2: put the conda env's bin directory on PATH directly
+    echo "Warning: Using direct conda path activation"
+    export PATH="/home/ubuntu/anaconda3/envs/$CONDA_ENV/bin:$PATH"
+fi
+
+# Set the model download source (optional)
+export PADDLE_PDX_MODEL_SOURCE="bos"
+export PADDLEX_ENABLE_ADAPTER="true"
+export PYTHONWARNINGS="ignore::UserWarning"
+
+start() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        echo "PaddleX PP-StructureV3 is already running"
+        return 1
+    fi
+    
+    echo "Starting PaddleX PP-StructureV3 daemon..."
+    echo "Port: $PORT"
+    echo "CUDA Devices: $CUDA_VISIBLE_DEVICES"
+    echo "Pipeline config: $PIPELINE_CONFIG"
+    
+    # Check that the config file exists
+    if [ ! -f "$PIPELINE_CONFIG" ]; then
+        echo "❌ Pipeline config file not found: $PIPELINE_CONFIG"
+        return 1
+    fi
+    
+    # Check the conda environment
+    if ! command -v python >/dev/null 2>&1; then
+        echo "❌ Python not found. Check conda environment activation."
+        return 1
+    fi
+    
+    # Check for the paddlex command
+    if ! command -v paddlex >/dev/null 2>&1; then
+        echo "❌ PaddleX not found. Check installation and environment."
+        return 1
+    fi
+    
+    echo "🔧 Using Python: $(which python)"
+    echo "🔧 Using PaddleX: $(which paddlex)"
+    echo "  CUDA Devices: $CUDA_VISIBLE_DEVICES"
+    
+    # Check that start_paddlex_with_adapter.py exists
+    if [ ! -f "$START_SCRIPT" ]; then
+        echo "❌ start_paddlex_with_adapter.py not found: $START_SCRIPT"
+        echo "Please update PADDLE_COMMON_DIR in the script to point to the correct location"
+        return 1
+    fi
+    
+    # Start the PaddleX service
+    # cd to the script's directory so relative paths resolve correctly
+    cd "$(dirname "$START_SCRIPT")" || exit 1
+    CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES nohup python3 "$START_SCRIPT" --serve \
+        --port $PORT \
+        --device "gpu" \
+        --pipeline "$PIPELINE_CONFIG" \
+        > $LOGFILE 2>&1 &
+    
+    echo $! > $PIDFILE
+    echo "✅ PaddleX PP-StructureV3 started with PID: $(cat $PIDFILE)"
+    echo "📋 Log file: $LOGFILE"
+    echo "🌐 Service URL: http://localhost:$PORT"
+    echo "🌐 API Endpoint: http://localhost:$PORT/layout-parsing"
+    echo "📖 API Documentation: http://localhost:$PORT/docs"
+}
+
+stop() {
+    if [ ! -f $PIDFILE ]; then
+        echo "PaddleX PP-StructureV3 is not running"
+        return 1
+    fi
+    
+    PID=$(cat $PIDFILE)
+    echo "Stopping PaddleX PP-StructureV3 (PID: $PID)..."
+    
+    # Graceful stop
+    kill $PID
+    
+    # Wait for the process to exit
+    for i in {1..10}; do
+        if ! kill -0 $PID 2>/dev/null; then
+            break
+        fi
+        echo "Waiting for process to stop... ($i/10)"
+        sleep 1
+    done
+    
+    # Force kill if the process is still running
+    if kill -0 $PID 2>/dev/null; then
+        echo "Force killing process..."
+        kill -9 $PID
+    fi
+    
+    rm -f $PIDFILE
+    echo "✅ PaddleX PP-StructureV3 stopped"
+}
+
+status() {
+    if [ -f $PIDFILE ] && kill -0 $(cat $PIDFILE) 2>/dev/null; then
+        PID=$(cat $PIDFILE)
+        echo "✅ PaddleX PP-StructureV3 is running (PID: $PID)"
+        echo "🌐 Service URL: http://localhost:$PORT"
+        echo "🌐 API Endpoint: http://localhost:$PORT/layout-parsing"
+        echo "📋 Log file: $LOGFILE"
+        
+        # Check whether the port is being listened on
+        if command -v ss >/dev/null 2>&1; then
+            if ss -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        elif command -v netstat >/dev/null 2>&1; then
+            if netstat -tuln | grep -q ":$PORT "; then
+                echo "🔗 Port $PORT is being listened on"
+            else
+                echo "⚠️  Port $PORT is not being listened on (service may be starting up)"
+            fi
+        fi
+        
+        # Check the API response
+        if command -v curl >/dev/null 2>&1; then
+            if curl -s --connect-timeout 2 http://127.0.0.1:$PORT/layout-parsing > /dev/null 2>&1; then
+                echo "🎯 API is responding"
+            else
+                echo "⚠️  API not responding (service may be starting up)"
+            fi
+        fi
+        
+        # Show the latest log lines
+        if [ -f $LOGFILE ]; then
+            echo "📄 Latest logs (last 5 lines):"
+            tail -5 $LOGFILE | sed 's/^/  /'
+        fi
+    else
+        echo "❌ PaddleX PP-StructureV3 is not running"
+        if [ -f $PIDFILE ]; then
+            echo "Removing stale PID file..."
+            rm -f $PIDFILE
+        fi
+    fi
+}
+
+logs() {
+    if [ -f $LOGFILE ]; then
+        echo "📄 PaddleX PP-StructureV3 logs:"
+        echo "=================="
+        tail -f $LOGFILE
+    else
+        echo "❌ Log file not found: $LOGFILE"
+    fi
+}
+
+config() {
+    echo "📋 Current configuration:"
+    echo "  Conda Environment: $CONDA_ENV"
+    echo "  Port: $PORT"
+    echo "  CUDA Visible Devices: $CUDA_VISIBLE_DEVICES"
+    echo "  Pipeline Config: $PIPELINE_CONFIG"
+    echo "  Paddle Common Directory: $PADDLE_COMMON_DIR"
+    echo "  Start Script: $START_SCRIPT"
+    echo "  Model Source: ${PADDLE_PDX_MODEL_SOURCE:-default}"
+    echo "  PID File: $PIDFILE"
+    echo "  Log File: $LOGFILE"
+    
+    if [ -f "$PIPELINE_CONFIG" ]; then
+        echo "✅ Pipeline config file exists"
+    else
+        echo "❌ Pipeline config file not found: $PIPELINE_CONFIG"
+    fi
+    
+    # Check that start_paddlex_with_adapter.py exists
+    if [ -f "$START_SCRIPT" ]; then
+        echo "✅ start_paddlex_with_adapter.py exists"
+    else
+        echo "❌ start_paddlex_with_adapter.py not found: $START_SCRIPT"
+        echo "   Please update PADDLE_COMMON_DIR in the script"
+    fi
+    
+    # Show environment info
+    echo ""
+    echo "🔧 Environment:"
+    echo "  Python: $(which python 2>/dev/null || echo 'Not found')"
+    echo "  PaddleX: $(which paddlex 2>/dev/null || echo 'Not found')"
+    echo "  Conda: $(which conda 2>/dev/null || echo 'Not found')"
+}
+
+# Show usage help
+usage() {
+    echo "PaddleX PP-StructureV3 Service Daemon"
+    echo "======================================"
+    echo "Usage: $0 {start|stop|restart|status|logs|config}"
+    echo ""
+    echo "Commands:"
+    echo "  start   - Start the PaddleX PP-StructureV3 service"
+    echo "  stop    - Stop the PaddleX PP-StructureV3 service"
+    echo "  restart - Restart the PaddleX PP-StructureV3 service"
+    echo "  status  - Show service status"
+    echo "  logs    - Show service logs (follow mode)"
+    echo "  config  - Show current configuration"
+    echo ""
+    echo "Configuration (edit script to modify):"
+    echo "  Port: $PORT"
+    echo "  CUDA Devices: $CUDA_VISIBLE_DEVICES"
+    echo "  Pipeline: $PIPELINE_CONFIG"
+    echo ""
+    echo "Examples:"
+    echo "  ./ppstructure_v3_daemon.sh start"
+    echo "  ./ppstructure_v3_daemon.sh status"
+    echo "  ./ppstructure_v3_daemon.sh logs"
+}
+
+case "$1" in
+    start)
+        start
+        ;;
+    stop)
+        stop
+        ;;
+    restart)
+        stop
+        sleep 3
+        start
+        ;;
+    status)
+        status
+        ;;
+    logs)
+        logs
+        ;;
+    config)
+        config
+        ;;
+    *)
+        usage
+        exit 1
+        ;;
+esac
+