Add the README, main program, and processor implementation for the MinerU vLLM batch-processing tool

zhch158_admin · 2 weeks ago · commit 688a54e6f3

+ 207 - 0
ocr_tools/mineru_vl_tool/README.md

@@ -0,0 +1,207 @@
+# MinerU vLLM Batch Processing Tool
+
+A batch document-processing tool built on the MinerU demo.py framework, supporting batch processing of PDF and image files.
+
+## Features
+
+- ✅ Unified input interface: PDF files, image files, image directories, file lists (.txt), and CSV files
+- ✅ Automatic input-type detection: the file type is recognized from the input path and handled accordingly
+- ✅ Page-range support: PDF files and image directories accept a page range (e.g. `1-5,7,9-12`)
+- ✅ Robust success criterion: success is judged by the existence of the output files
+- ✅ Digit normalization: full-width digits are converted to half-width automatically (optional; see the sketch after this list)
+- ✅ Dry-run mode: validate the configuration and inputs without doing any processing
+- ✅ Debug mode: save intermediate results (middle.json, model.json)
+- ✅ Progress display: live progress bar with running statistics
+
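+For illustration, a minimal, hypothetical sketch of what the full-width → half-width digit conversion does (the real implementation lives in `ocr_utils.normalize_markdown_table` / `normalize_json_table`):
+
+```python
+FULLWIDTH_DIGITS = "０１２３４５６７８９"
+
+def to_halfwidth_digits(text: str) -> str:
+    """Map full-width digits (U+FF10-U+FF19) to ASCII '0'-'9'."""
+    return text.translate(str.maketrans(FULLWIDTH_DIGITS, "0123456789"))
+```
+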
+## Installation
+
+```bash
+# Install MinerU
+pip install mineru
+
+# Install the remaining dependencies
+pip install loguru tqdm pypdfium2
+```
+
+## Usage
+
+### Basic usage
+
+```bash
+# Process a single PDF file
+python main.py --input document.pdf --output_dir ./output
+
+# Process a directory of images
+python main.py --input ./images/ --output_dir ./output
+
+# Process a file list
+python main.py --input file_list.txt --output_dir ./output
+
+# Process a CSV file (reprocess the failed files it lists)
+python main.py --input results.csv --output_dir ./output
+```
+
+### Advanced usage
+
+```bash
+# Specify a page range (PDF or image directory)
+python main.py --input document.pdf --output_dir ./output --pages "1-5,7"
+
+# Process only the first 10 pages (PDF or image directory)
+python main.py --input document.pdf --output_dir ./output --pages "-10"
+
+# From page 5 to the end (PDF or image directory)
+python main.py --input document.pdf --output_dir ./output --pages "5-"
+
+# Enable debug mode
+python main.py --input document.pdf --output_dir ./output --debug
+
+# Validate the configuration only (dry run)
+python main.py --input document.pdf --output_dir ./output --dry_run
+
+# Specify the server address
+python main.py --input document.pdf --output_dir ./output --server_url http://10.192.72.11:20006
+
+# Adjust the batch size
+python main.py --input ./images/ --output_dir ./output --batch_size 4
+
+# Disable digit normalization
+python main.py --input document.pdf --output_dir ./output --no-normalize
+```
+
+## Parameters
+
+### Input/output parameters
+
+- `--input, -i`: input path (required)
+  - PDF file: automatically converted to images before processing
+  - Image file: processed directly
+  - Image directory: all image files in it are scanned
+  - File list (.txt): one file path per line
+  - CSV file: the list of failed files is read from it
+
+- `--output_dir, -o`: output directory (required)
+
+### MinerU vLLM parameters
+
+- `--server_url`: MinerU vLLM server address (default: `http://127.0.0.1:20006`)
+- `--timeout`: request timeout in seconds (default: `300`)
+- `--pdf_dpi`: DPI for PDF-to-image conversion (default: `200`)
+
+### Processing parameters
+
+- `--batch_size`: batch size (default: `1`)
+- `--pages, -p`: page range (applies to PDFs and image directories; see the parsing sketch after this list)
+  - Format: `"1-5,7,9-12"` (pages 1-5, page 7, pages 9-12)
+  - `"1-"`: from page 1 to the end
+  - `"-10"`: the first 10 pages (handy for testing; replaces the former test_mode)
+- `--collect_results`: collect the processing results into the given CSV file
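+
+For illustration, a minimal sketch of how such a page-range string could be parsed into 1-based page numbers (hypothetical helper; the actual filtering happens inside `ocr_utils.get_input_files`):
+
+```python
+def parse_page_range(spec: str, total_pages: int) -> list[int]:
+    """Parse a spec like '1-5,7,9-12', '5-', or '-10' into sorted 1-based page numbers."""
+    pages: set[int] = set()
+    for part in spec.split(","):
+        part = part.strip()
+        if "-" in part:
+            start_s, end_s = part.split("-", 1)
+            start = int(start_s) if start_s else 1          # "-10" -> start at page 1
+            end = int(end_s) if end_s else total_pages      # "5-"  -> run to the last page
+            pages.update(range(start, end + 1))
+        else:
+            pages.add(int(part))
+    return sorted(p for p in pages if 1 <= p <= total_pages)
+```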
+
+### Feature switches
+
+- `--no-normalize`: disable digit normalization (enabled by default)
+- `--debug`: enable debug mode (saves intermediate results)
+- `--dry_run`: only validate the configuration, without processing anything
+
+### Logging parameters
+
+- `--log_level`: log level (`DEBUG`, `INFO`, `WARNING`, `ERROR`; default: `INFO`)
+- `--log_file`: log file path
+
+## Output format
+
+Output directory layout (compatible with MinerU demo.py):
+
+```
+output_dir/
+├── filename.md              # Markdown content
+├── filename.json            # Content-list JSON
+├── filename_layout.pdf      # Layout bounding boxes (for debugging)
+├── filename_middle.json     # Middle JSON (debug mode)
+├── filename_model.json      # Model output (debug mode)
+└── images/                  # Extracted images
+    └── filename.png
+```
+
+### Success criterion
+
+A file counts as successfully processed when:
+- the corresponding `.md` file exists in the output directory, and
+- the corresponding `.json` file exists in the output directory.
+
+If both files exist, processing is considered successful. Below is a minimal sketch of this check (it mirrors what `processor.process_single_image` does):
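+
+```python
+from pathlib import Path
+
+def outputs_exist(output_dir: str, stem: str) -> bool:
+    """True when both expected output files for `stem` are present."""
+    out = Path(output_dir)
+    return (out / f"{stem}.md").exists() and (out / f"{stem}.json").exists()
+```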
+
+## Statistics
+
+When processing finishes, the tool reports:
+
+- File statistics: total files, succeeded, failed, skipped
+- Content extraction: total number of extracted blocks, counts per block type
+- Performance: total time, throughput, average time per image
+
+The full results are saved to `{output_dir}/{output_dir_name}_results.json` inside the output directory.
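+
+A short sketch of reading those results back (the file name assumes an output directory named `output`):
+
+```python
+import json
+from pathlib import Path
+
+data = json.loads((Path("./output") / "output_results.json").read_text(encoding="utf-8"))
+print(f"success rate: {data['stats']['success_rate']:.2%}")
+print(f"throughput:   {data['stats']['throughput']:.2f} images/s")
+failed = [r["image_path"] for r in data["results"] if not r.get("success")]
+```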
+
+## Examples
+
+### Example 1: process a PDF file
+
+```bash
+python main.py \
+  --input /path/to/document.pdf \
+  --output_dir ./output \
+  --pages "1-10" \
+  --server_url http://10.192.72.11:20006 \
+  --debug
+```
+
+### Example 2: batch-process an image directory
+
+```bash
+python main.py \
+  --input /path/to/images/ \
+  --output_dir ./output \
+  --batch_size 4 \
+  --log_file ./processing.log
+```
+
+### Example 3: dry-run validation
+
+```bash
+python main.py \
+  --input /path/to/document.pdf \
+  --output_dir ./output \
+  --dry_run
+```
+
+## Notes
+
+1. **Server connection**: make sure the MinerU vLLM server is running and reachable
+2. **Memory usage**: keep an eye on memory consumption when processing large files
+3. **File naming**: PDF pages are converted to images named like `filename_page_001.png`
+4. **Page range**: page numbers are 1-based (not 0-based)
+
+## Troubleshooting
+
+### Problem: cannot connect to the server
+
+- Check that the server address is correct
+- Confirm the server is actually running
+- Check network connectivity and firewall settings
+
+### Problem: processing fails
+
+- Run with `--debug` to see detailed error messages
+- Check the permissions of the output directory
+- Inspect the log file for more information
+
+### Problem: output files are missing
+
+- Check whether processing actually failed (look at the error messages)
+- Confirm the output directory path is correct
+- Make sure there is enough disk space
+
+## Related tools
+
+- `ocr_utils`: OCR utility package providing PDF handling, file handling, etc.
+- MinerU: document-parsing framework
+

+ 10 - 0
ocr_tools/mineru_vl_tool/__init__.py

@@ -0,0 +1,10 @@
+"""
+MinerU vLLM 工具
+
+基于 MinerU demo.py 框架的批量文档处理工具
+支持 PDF 和图片文件的批量处理
+"""
+
+__version__ = "1.0.0"
+__author__ = "zhch158"
+

+ 463 - 0
ocr_tools/mineru_vl_tool/main.py

@@ -0,0 +1,463 @@
+#!/usr/bin/env python3
+"""
+批量处理图片/PDF文件并生成符合评测要求的预测结果(MinerU版本)
+
+根据 MinerU demo.py 框架调用方式:
+- 输入:支持 PDF 和各种图片格式(统一使用 --input 参数)
+- 输出:每个文件对应的 .md、.json 文件,所有图片保存为单独的图片文件
+- 调用方式:通过 vlm-http-client 连接到 MinerU vLLM 服务器
+
+使用方法:
+    python main.py --input document.pdf --output_dir ./output
+    python main.py --input ./images/ --output_dir ./output
+    python main.py --input file_list.txt --output_dir ./output
+    python main.py --input results.csv --output_dir ./output --dry_run
+"""
+
+import os
+import sys
+import json
+import time
+import traceback
+from pathlib import Path
+from typing import List, Dict, Any
+from tqdm import tqdm
+import argparse
+
+from loguru import logger
+
+
+# Make the repository root importable so ocr_utils can be found
+ocr_platform_root = Path(__file__).parents[2]
+if str(ocr_platform_root) not in sys.path:
+    sys.path.insert(0, str(ocr_platform_root))
+
+from ocr_utils import (
+    get_input_files,
+    collect_pid_files,
+    PDFUtils,
+    setup_logging
+)
+
+# Import the processor (works both as a package module and as a local script)
+try:
+    from .processor import MinerUVLLMProcessor
+except ImportError:
+    from processor import MinerUVLLMProcessor
+
+
+def process_images_single_process(
+    image_paths: List[str],
+    processor: MinerUVLLMProcessor,
+    batch_size: int = 1,
+    output_dir: str = "./output"
+) -> List[Dict[str, Any]]:
+    """
+    单进程版本的图像处理函数
+    
+    Args:
+        image_paths: 图像文件路径列表
+        processor: MinerU vLLM 处理器
+        batch_size: 批次大小
+        output_dir: 输出目录
+        
+    Returns:
+        处理结果列表
+    """
+    # Create the output directory
+    output_path = Path(output_dir)
+    output_path.mkdir(parents=True, exist_ok=True)
+    
+    all_results = []
+    total_images = len(image_paths)
+    
+    logger.info(f"Processing {total_images} images with batch size {batch_size}")
+    
+    with tqdm(total=total_images, desc="Processing images", unit="img", 
+              bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}]') as pbar:
+        
+        for i in range(0, total_images, batch_size):
+            batch = image_paths[i:i + batch_size]
+            batch_start_time = time.time()
+            batch_results = []
+            
+            try:
+                for image_path in batch:
+                    try:
+                        result = processor.process_single_image(image_path, output_dir)
+                        batch_results.append(result)
+                    except Exception as e:
+                        logger.error(f"Error processing {image_path}: {e}")
+                        batch_results.append({
+                            "image_path": image_path,
+                            "processing_time": 0,
+                            "success": False,
+                            "server": processor.server_url,
+                            "error": str(e)
+                        })
+                
+                batch_processing_time = time.time() - batch_start_time
+                all_results.extend(batch_results)
+                
+                # Update the progress bar
+                success_count = sum(1 for r in batch_results if r.get('success', False))
+                skipped_count = sum(1 for r in batch_results if r.get('skipped', False))
+                total_success = sum(1 for r in all_results if r.get('success', False))
+                total_skipped = sum(1 for r in all_results if r.get('skipped', False))
+                avg_time = batch_processing_time / len(batch) if len(batch) > 0 else 0
+                
+                total_blocks = sum(r.get('extraction_stats', {}).get('total_blocks', 0) for r in batch_results)
+                
+                pbar.update(len(batch))
+                pbar.set_postfix({
+                    'batch_time': f"{batch_processing_time:.2f}s",
+                    'avg_time': f"{avg_time:.2f}s/img",
+                    'success': f"{total_success}/{len(all_results)}",
+                    'skipped': f"{total_skipped}",
+                    'blocks': f"{total_blocks}",
+                    'rate': f"{total_success/len(all_results)*100:.1f}%" if len(all_results) > 0 else "0%"
+                })
+                
+            except Exception as e:
+                logger.error(f"Error processing batch {[Path(p).name for p in batch]}: {e}")
+                error_results = []
+                for img_path in batch:
+                    error_results.append({
+                        "image_path": str(img_path),
+                        "processing_time": 0,
+                        "success": False,
+                        "server": processor.server_url,
+                        "error": str(e)
+                    })
+                all_results.extend(error_results)
+                pbar.update(len(batch))
+    
+    return all_results
+
+
+def main():
+    """主函数"""
+    parser = argparse.ArgumentParser(
+        description="MinerU vLLM Batch Processing (demo.py framework)",
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+示例:
+  # 处理单个PDF文件
+  python main.py --input document.pdf --output_dir ./output
+  
+  # 处理图片目录
+  python main.py --input ./images/ --output_dir ./output
+  
+  # 处理文件列表
+  python main.py --input file_list.txt --output_dir ./output
+  
+  # 处理CSV文件(失败的文件)
+  python main.py --input results.csv --output_dir ./output
+  
+  # 指定页面范围(仅PDF)
+  python main.py --input document.pdf --output_dir ./output --pages "1-5,7"
+  
+  # 启用调试模式
+  python main.py --input document.pdf --output_dir ./output --debug
+  
+  # 仅验证配置(dry run)
+  python main.py --input document.pdf --output_dir ./output --dry_run
+        """
+    )
+    
+    # Input argument (single unified --input)
+    parser.add_argument(
+        "--input", "-i",
+        required=True,
+        type=str,
+        help="输入路径(支持PDF文件、图片文件、图片目录、文件列表.txt、CSV文件)"
+    )
+    
+    # Output argument
+    parser.add_argument(
+        "--output_dir", "-o",
+        type=str,
+        required=True,
+        help="输出目录"
+    )
+    
+    # MinerU vLLM arguments
+    parser.add_argument(
+        "--server_url",
+        type=str,
+        default="http://127.0.0.1:20006",
+        help="MinerU vLLM server URL"
+    )
+    parser.add_argument(
+        "--timeout",
+        type=int,
+        default=300,
+        help="Request timeout in seconds"
+    )
+    parser.add_argument(
+        "--pdf_dpi",
+        type=int,
+        default=200,
+        help="DPI for PDF to image conversion"
+    )
+    parser.add_argument(
+        '--no-normalize',
+        action='store_true',
+        help='Disable digit normalization'
+    )
+    parser.add_argument(
+        '--debug',
+        action='store_true',
+        help='Enable debug mode'
+    )
+    
+    # Processing arguments
+    parser.add_argument(
+        "--batch_size",
+        type=int,
+        default=1,
+        help="Batch size"
+    )
+    parser.add_argument(
+        "--pages", "-p",
+        type=str,
+        help="页面范围(PDF和图片目录有效),如: '1-5,7,9-12', '1-', '-10'"
+    )
+    parser.add_argument(
+        "--collect_results",
+        type=str,
+        help="收集处理结果到指定CSV文件"
+    )
+    
+    # Logging arguments
+    parser.add_argument(
+        "--log_level",
+        default="INFO",
+        choices=["DEBUG", "INFO", "WARNING", "ERROR"],
+        help="日志级别(默认: INFO)"
+    )
+    parser.add_argument(
+        "--log_file",
+        type=str,
+        help="日志文件路径"
+    )
+    
+    # Dry-run argument
+    parser.add_argument(
+        "--dry_run",
+        action="store_true",
+        help="仅验证配置和输入,不执行实际处理"
+    )
+    
+    args = parser.parse_args()
+    
+    # Configure logging
+    setup_logging(args.log_level, args.log_file)
+    
+    try:
+        # Build a lightweight namespace carrying the fields get_input_files() expects
+        args_obj = argparse.Namespace(
+            input=args.input,
+            output_dir=args.output_dir,
+            pdf_dpi=args.pdf_dpi,
+        )
+        
+        # Collect and preprocess the input files (page-range filtering happens inside get_input_files)
+        logger.info("🔄 Preprocessing input files...")
+        if args.pages:
+            logger.info(f"📄 Page range: {args.pages}")
+        image_files = get_input_files(args_obj, page_range=args.pages)
+        
+        if not image_files:
+            logger.error("❌ No input files found or processed")
+            return 1
+        
+        output_dir = Path(args.output_dir).resolve()
+        logger.info(f"📁 Output dir: {output_dir}")
+        logger.info(f"📊 Found {len(image_files)} image files to process")
+        
+        # Dry-run mode
+        if args.dry_run:
+            logger.info("🔍 Dry run mode: validating the configuration without processing")
+            logger.info(f"📋 Configuration:")
+            logger.info(f"  - Input: {args.input}")
+            logger.info(f"  - Output dir: {output_dir}")
+            logger.info(f"  - Server: {args.server_url}")
+            logger.info(f"  - Timeout: {args.timeout}s")
+            logger.info(f"  - Batch size: {args.batch_size}")
+            logger.info(f"  - PDF DPI: {args.pdf_dpi}")
+            logger.info(f"  - Digit normalization: {not args.no_normalize}")
+            logger.info(f"  - Debug mode: {args.debug}")
+            if args.pages:
+                logger.info(f"  - Page range: {args.pages}")
+            logger.info(f"📋 Files to process ({len(image_files)} total):")
+            for i, img_file in enumerate(image_files[:20], 1):  # show at most the first 20
+                logger.info(f"  {i}. {img_file}")
+            if len(image_files) > 20:
+                logger.info(f"  ... and {len(image_files) - 20} more files")
+            logger.info("✅ Dry run finished: configuration validated")
+            return 0
+        
+        logger.info(f"🌐 Using server: {args.server_url}")
+        logger.info(f"📦 Batch size: {args.batch_size}")
+        logger.info(f"⏱️ Timeout: {args.timeout}s")
+        
+        # Create the processor
+        processor = MinerUVLLMProcessor(
+            server_url=args.server_url,
+            timeout=args.timeout,
+            normalize_numbers=not args.no_normalize,
+            debug=args.debug
+        )
+        
+        # Run the processing loop
+        start_time = time.time()
+        results = process_images_single_process(
+            image_files,
+            processor,
+            args.batch_size,
+            str(output_dir)
+        )
+        
+        total_time = time.time() - start_time
+        
+        # Aggregate the results
+        success_count = sum(1 for r in results if r.get('success', False))
+        skipped_count = sum(1 for r in results if r.get('skipped', False))
+        error_count = len(results) - success_count
+        pdf_page_count = sum(1 for r in results if r.get('is_pdf_page', False))
+        
+        # Aggregate extraction statistics over all blocks
+        total_blocks = sum(r.get('extraction_stats', {}).get('total_blocks', 0) for r in results)
+        block_type_stats = {}
+        for result in results:
+            if 'extraction_stats' in result and 'block_types' in result['extraction_stats']:
+                for block_type, count in result['extraction_stats']['block_types'].items():
+                    block_type_stats[block_type] = block_type_stats.get(block_type, 0) + count
+        
+        print(f"\n" + "="*60)
+        print(f"✅ Processing completed!")
+        print(f"📊 Statistics:")
+        print(f"  Total files processed: {len(image_files)}")
+        print(f"  PDF pages processed: {pdf_page_count}")
+        print(f"  Regular images processed: {len(image_files) - pdf_page_count}")
+        print(f"  Successful: {success_count}")
+        print(f"  Skipped: {skipped_count}")
+        print(f"  Failed: {error_count}")
+        if len(image_files) > 0:
+            print(f"  Success rate: {success_count / len(image_files) * 100:.2f}%")
+        
+        print(f"📋 Content Extraction:")
+        print(f"  Total blocks extracted: {total_blocks}")
+        if block_type_stats:
+            print(f"  Block types:")
+            for block_type, count in sorted(block_type_stats.items()):
+                print(f"    {block_type}: {count}")
+        
+        print(f"⏱️ Performance:")
+        print(f"  Total time: {total_time:.2f} seconds")
+        if total_time > 0:
+            print(f"  Throughput: {len(image_files) / total_time:.2f} images/second")
+            print(f"  Avg time per image: {total_time / len(image_files):.2f} seconds")
+        
+        print(f"\n📁 Output Structure (demo.py compatible):")
+        print(f"  output_dir/")
+        print(f"  ├── filename.md              # Markdown content")
+        print(f"  ├── filename.json            # Content list")
+        print(f"  ├── filename_layout.pdf      # Debug: layout bbox")
+        print(f"  └── images/                  # Extracted images")
+        print(f"      └── filename.png")
+        if args.debug:
+            print(f"  ├── filename_middle.json    # Debug: middle JSON")
+            print(f"  └── filename_model.json     # Debug: model output")
+
+        # Build the summary statistics
+        stats = {
+            "total_files": len(image_files),
+            "pdf_pages": pdf_page_count,
+            "regular_images": len(image_files) - pdf_page_count,
+            "success_count": success_count,
+            "skipped_count": skipped_count,
+            "error_count": error_count,
+            "success_rate": success_count / len(image_files) if len(image_files) > 0 else 0,
+            "total_time": total_time,
+            "throughput": len(image_files) / total_time if total_time > 0 else 0,
+            "avg_time_per_image": total_time / len(image_files) if len(image_files) > 0 else 0,
+            "batch_size": args.batch_size,
+            "server": args.server_url,
+            "backend": "vlm-http-client",
+            "timeout": args.timeout,
+            "pdf_dpi": args.pdf_dpi,
+            "total_blocks": total_blocks,
+            "block_type_stats": block_type_stats,
+            "normalization_enabled": not args.no_normalize,
+            "timestamp": time.strftime("%Y-%m-%d %H:%M:%S")
+        }
+        
+        # Save the final results JSON
+        output_file_name = Path(output_dir).name
+        output_file = output_dir / f"{output_file_name}_results.json"
+        final_results = {
+            "stats": stats,
+            "results": results
+        }
+        
+        with open(output_file, 'w', encoding='utf-8') as f:
+            json.dump(final_results, f, ensure_ascii=False, indent=2)
+        
+        logger.info(f"💾 Results saved to: {output_file}")
+
+        # Collect the processed-file list into a CSV
+        if not args.collect_results:
+            output_file_processed = output_dir / f"processed_files_{time.strftime('%Y%m%d_%H%M%S')}.csv"
+        else:
+            output_file_processed = Path(args.collect_results).resolve()
+            
+        processed_files = collect_pid_files(str(output_file))
+        with open(output_file_processed, 'w', encoding='utf-8') as f:
+            f.write("image_path,status\n")
+            for file_path, status in processed_files:
+                f.write(f"{file_path},{status}\n")
+        logger.info(f"💾 Processed files saved to: {output_file_processed}")
+
+        return 0
+        
+    except Exception as e:
+        logger.error(f"Processing failed: {e}")
+        traceback.print_exc()
+        return 1
+
+
+if __name__ == "__main__":
+    logger.info(f"🚀 启动MinerU vLLM统一PDF/图像处理程序...")
+    logger.info(f"🔧 CUDA_VISIBLE_DEVICES: {os.environ.get('CUDA_VISIBLE_DEVICES', 'Not set')}")
+    
+    if len(sys.argv) == 1:
+        # No command-line arguments were given: run with the default configuration below
+        logger.info("ℹ️  No command line arguments provided. Running with default configuration...")
+        
+        # Default configuration
+        default_config = {
+            "input": "/Users/zhch158/workspace/data/流水分析/马公账流水_工商银行.pdf",
+            "output_dir": "./output",
+            "server_url": "http://10.192.72.11:20006",
+            "timeout": "300",
+            "batch_size": "1",
+            "pdf_dpi": "200",
+            "pages": "-1",
+        }
+        
+        # Rebuild sys.argv from the default configuration
+        sys.argv = [sys.argv[0]]
+        for key, value in default_config.items():
+            sys.argv.extend([f"--{key}", str(value)])
+        
+        # Enable debug mode
+        sys.argv.append("--debug")
+    
+    sys.exit(main())
+

+ 377 - 0
ocr_tools/mineru_vl_tool/processor.py

@@ -0,0 +1,377 @@
+"""
+MinerU vLLM 处理器
+
+基于 MinerU demo.py 框架的文档处理类
+"""
+import os
+import json
+import time
+import traceback
+from pathlib import Path
+from typing import List, Dict, Any
+from loguru import logger
+
+# Import MinerU core components
+from mineru.cli.common import read_fn, convert_pdf_bytes_to_bytes_by_pypdfium2
+from mineru.data.data_reader_writer import FileBasedDataWriter
+from mineru.utils.draw_bbox import draw_layout_bbox
+from mineru.utils.enum_class import MakeMode
+from mineru.backend.vlm.vlm_analyze import doc_analyze as vlm_doc_analyze
+from mineru.backend.vlm.vlm_middle_json_mkcontent import union_make as vlm_union_make
+
+# Make the repository root importable so ocr_utils can be found
+import sys
+ocr_platform_root = Path(__file__).parents[2]
+if str(ocr_platform_root) not in sys.path:
+    sys.path.insert(0, str(ocr_platform_root))
+
+from ocr_utils import normalize_markdown_table, normalize_json_table
+
+
+class MinerUVLLMProcessor:
+    """MinerU vLLM 处理器 (基于 demo.py 框架)"""
+    
+    def __init__(self, 
+                 server_url: str = "http://127.0.0.1:8121",
+                 timeout: int = 300,
+                 normalize_numbers: bool = False,
+                 debug: bool = False):
+        """
+        初始化处理器
+        
+        Args:
+            server_url: vLLM 服务器地址
+            timeout: 请求超时时间
+            normalize_numbers: 是否标准化数字
+            debug: 是否启用调试模式
+        """
+        self.server_url = server_url.rstrip('/')
+        self.timeout = timeout
+        self.normalize_numbers = normalize_numbers
+        self.debug = debug
+        self.backend = "http-client"  # 固定使用 http-client 后端
+        
+        logger.info(f"MinerU vLLM Processor 初始化完成:")
+        logger.info(f"  - 服务器: {server_url}")
+        logger.info(f"  - 后端: vlm-{self.backend}")
+        logger.info(f"  - 超时: {timeout}s")
+        logger.info(f"  - 数字标准化: {normalize_numbers}")
+        logger.info(f"  - 调试模式: {debug}")
+    
+    def do_parse_single_file(self, 
+                           input_file: str, 
+                           output_dir: str,
+                           start_page_id: int = 0,
+                           end_page_id: int | None = None) -> Dict[str, Any]:
+        """
+        解析单个文件 (参考 demo.py 的 do_parse 函数)
+        
+        Args:
+            input_file: 文件路径
+            output_dir: 输出目录
+            start_page_id: 起始页ID
+            end_page_id: 结束页ID
+            
+        Returns:
+            dict: 处理结果
+        """
+        try:
+            # Prepare the file name and raw bytes
+            file_path = Path(input_file)
+            pdf_file_name = file_path.stem
+            pdf_bytes = read_fn(str(file_path))
+            
+            # Re-render the PDF bytes for the requested page range (PDF inputs only)
+            if file_path.suffix.lower() == '.pdf':
+                pdf_bytes = convert_pdf_bytes_to_bytes_by_pypdfium2(
+                    pdf_bytes, start_page_id, end_page_id
+                )
+            
+            # Prepare the writers (creates the output directories)
+            local_md_dir = Path(output_dir).resolve()
+            local_image_dir = local_md_dir / "images"
+            image_writer = FileBasedDataWriter(local_image_dir.as_posix())
+            md_writer = FileBasedDataWriter(local_md_dir.as_posix())
+            
+            # Analyze the document with the VLM backend (the core call)
+            middle_json, model_output = vlm_doc_analyze(
+                pdf_bytes, 
+                image_writer=image_writer, 
+                backend=self.backend,
+                server_url=self.server_url
+            )
+            
+            pdf_info = middle_json["pdf_info"]
+            
+            # Write the output files
+            output_files = self._process_output(
+                pdf_info=pdf_info,
+                pdf_bytes=pdf_bytes,
+                pdf_file_name=pdf_file_name,
+                local_md_dir=local_md_dir,
+                local_image_dir=local_image_dir,
+                md_writer=md_writer,
+                middle_json=middle_json,
+                model_output=model_output,
+                original_file_path=str(file_path)
+            )
+            
+            # Collect extraction statistics
+            extraction_stats = self._get_extraction_stats(middle_json)
+            
+            return {
+                "success": True,
+                "pdf_info": pdf_info,
+                "middle_json": middle_json,
+                "model_output": model_output,
+                "output_files": output_files,
+                "extraction_stats": extraction_stats
+            }
+            
+        except Exception as e:
+            logger.error(f"Failed to process {file_path}: {e}")
+            if self.debug:
+                traceback.print_exc()
+            return {
+                "success": False,
+                "error": str(e)
+            }
+    
+    def _process_output(self,
+                       pdf_info,
+                       pdf_bytes,
+                       pdf_file_name,
+                       local_md_dir,
+                       local_image_dir,
+                       md_writer,
+                       middle_json,
+                       model_output,
+                       original_file_path: str) -> Dict[str, str]:
+        """
+        处理输出文件
+        
+        Args:
+            pdf_info: PDF信息
+            pdf_bytes: PDF字节数据
+            pdf_file_name: PDF文件名
+            local_md_dir: Markdown目录
+            local_image_dir: 图片目录
+            md_writer: Markdown写入器
+            middle_json: 中间JSON数据
+            model_output: 模型输出
+            original_file_path: 原始文件路径
+            
+        Returns:
+            dict: 保存的文件路径信息
+        """
+        saved_files = {}
+        
+        try:
+            # Relative image directory name used inside the Markdown
+            image_dir = str(os.path.basename(local_image_dir))
+            
+            # 1. Generate and save the Markdown file
+            md_content_str = vlm_union_make(pdf_info, MakeMode.MM_MD, image_dir)
+            
+            # Digit normalization (full-width -> half-width)
+            if self.normalize_numbers:
+                original_md = md_content_str
+                md_content_str = normalize_markdown_table(md_content_str)
+                
+                # Positional diff; valid because normalization maps characters 1:1
+                changes_count = len([1 for o, n in zip(original_md, md_content_str) if o != n])
+                if changes_count > 0:
+                    saved_files['md_normalized'] = f"✅ Normalized {changes_count} characters (full-width -> half-width)"
+                else:
+                    saved_files['md_normalized'] = "ℹ️ Nothing to normalize (already in standard form)"
+            
+            md_writer.write_string(f"{pdf_file_name}.md", md_content_str)
+            saved_files['md'] = os.path.join(local_md_dir, f"{pdf_file_name}.md")
+            
+            # 2. Generate and save the content_list JSON file
+            content_list = vlm_union_make(pdf_info, MakeMode.CONTENT_LIST, image_dir)
+            content_list_str = json.dumps(content_list, ensure_ascii=False, indent=2)
+            md_writer.write_string(f"{pdf_file_name}_original.json", content_list_str)
+            
+            # Convert bbox coordinates from the 1000-based scale to pixel coordinates
+            if pdf_info and len(pdf_info) > 0:
+                page_width, page_height = pdf_info[0].get('page_size', [1000, 1000])
+                for element in content_list:
+                    if "bbox" in element:
+                        x0, y0, x1, y1 = element["bbox"]
+                        element["bbox"] = [
+                            int(x0 / 1000 * page_width),
+                            int(y0 / 1000 * page_height),
+                            int(x1 / 1000 * page_width),
+                            int(y1 / 1000 * page_height),
+                        ]
+            content_list_str = json.dumps(content_list, ensure_ascii=False, indent=2)
+
+            # Digit normalization (full-width -> half-width)
+            if self.normalize_numbers:
+                original_json = content_list_str
+                content_list_str = normalize_json_table(content_list_str)
+                
+                changes_count = len([1 for o, n in zip(original_json, content_list_str) if o != n])
+                if changes_count > 0:
+                    saved_files['json_normalized'] = f"✅ Normalized {changes_count} characters (full-width -> half-width)"
+                else:
+                    saved_files['json_normalized'] = "ℹ️ Nothing to normalize (already in standard form)"
+            
+            md_writer.write_string(f"{pdf_file_name}.json", content_list_str)
+            saved_files['json'] = os.path.join(local_md_dir, f"{pdf_file_name}.json")
+            
+            # Draw the layout bounding boxes
+            try:
+                draw_layout_bbox(pdf_info, pdf_bytes, local_md_dir, f"{pdf_file_name}_layout.pdf")
+                saved_files['layout_pdf'] = os.path.join(local_md_dir, f"{pdf_file_name}_layout.pdf")
+            except Exception as e:
+                logger.warning(f"Failed to draw layout bbox: {e}")
+            
+            # In debug mode, save additional artifacts
+            if self.debug:
+                # Save middle.json
+                middle_json_str = json.dumps(middle_json, ensure_ascii=False, indent=2)
+                if self.normalize_numbers:
+                    middle_json_str = normalize_json_table(middle_json_str)
+                
+                md_writer.write_string(f"{pdf_file_name}_middle.json", middle_json_str)
+                saved_files['middle_json'] = os.path.join(local_md_dir, f"{pdf_file_name}_middle.json")
+                
+                # Save the raw model output
+                if model_output:
+                    model_output_str = json.dumps(model_output, ensure_ascii=False, indent=2)
+                    md_writer.write_string(f"{pdf_file_name}_model.json", model_output_str)
+                    saved_files['model_output'] = os.path.join(local_md_dir, f"{pdf_file_name}_model.json")
+            
+            logger.info(f"Output saved to: {local_md_dir}")
+            
+        except Exception as e:
+            logger.error(f"Error in _process_output: {e}")
+            if self.debug:
+                traceback.print_exc()
+        
+        return saved_files
+    
+    def _get_extraction_stats(self, middle_json: Dict) -> Dict[str, Any]:
+        """
+        获取提取统计信息
+        
+        Args:
+            middle_json: 中间JSON数据
+            
+        Returns:
+            dict: 统计信息
+        """
+        stats = {
+            "total_blocks": 0,
+            "block_types": {},
+            "total_pages": 0
+        }
+        
+        try:
+            pdf_info = middle_json.get("pdf_info", [])
+            if isinstance(pdf_info, list):
+                stats["total_pages"] = len(pdf_info)
+                
+                for page_info in pdf_info:
+                    para_blocks = page_info.get("para_blocks", [])
+                    stats["total_blocks"] += len(para_blocks)
+                    
+                    for block in para_blocks:
+                        block_type = block.get("type", "unknown")
+                        stats["block_types"][block_type] = stats["block_types"].get(block_type, 0) + 1
+                        
+        except Exception as e:
+            logger.warning(f"Failed to get extraction stats: {e}")
+        
+        return stats
+    
+    def process_single_image(self, image_path: str, output_dir: str) -> Dict[str, Any]:
+        """
+        处理单张图片
+        
+        Args:
+            image_path: 图片路径
+            output_dir: 输出目录
+            
+        Returns:
+            dict: 处理结果,包含 success 字段(基于输出文件存在性判断)
+        """
+        start_time = time.time()
+        image_path_obj = Path(image_path)
+        image_name = image_path_obj.stem
+        
+        # 判断是否为PDF页面(根据文件名模式)
+        is_pdf_page = "_page_" in image_path_obj.name
+        
+        # 根据输入类型生成预期的输出文件名
+        if is_pdf_page:
+            # PDF页面:文件名格式为 filename_page_001.png
+            # 输出文件名:filename_page_001.md 和 filename_page_001.json
+            expected_md_path = Path(output_dir) / f"{image_name}.md"
+            expected_json_path = Path(output_dir) / f"{image_name}.json"
+        else:
+            # 普通图片:输出文件名:filename.md 和 filename.json
+            expected_md_path = Path(output_dir) / f"{image_name}.md"
+            expected_json_path = Path(output_dir) / f"{image_name}.json"
+        
+        result_info = {
+            "image_path": image_path,
+            "processing_time": 0,
+            "success": False,
+            "server": self.server_url,
+            "error": None,
+            "output_files": {},
+            "is_pdf_page": is_pdf_page,
+            "extraction_stats": {}
+        }
+        
+        try:
+            # Skip the work if the output files already exist (the success criterion)
+            if expected_md_path.exists() and expected_json_path.exists():
+                result_info.update({
+                    "success": True,
+                    "processing_time": 0,
+                    "output_files": {
+                        "md": str(expected_md_path),
+                        "json": str(expected_json_path)
+                    },
+                    "skipped": True
+                })
+                logger.info(f"✅ 文件已存在,跳过处理: {image_name}")
+                return result_info
+            
+            # Process via do_parse_single_file
+            parse_result = self.do_parse_single_file(image_path, output_dir)
+            
+            # After processing, re-check that the output files exist (the success criterion)
+            if expected_md_path.exists() and expected_json_path.exists():
+                result_info.update({
+                    "success": True,
+                    "output_files": parse_result.get("output_files", {}),
+                    "extraction_stats": parse_result.get("extraction_stats", {})
+                })
+                logger.info(f"✅ 处理成功: {image_name}")
+            else:
+                # 文件不存在,标记为失败
+                missing_files = []
+                if not expected_md_path.exists():
+                    missing_files.append("md")
+                if not expected_json_path.exists():
+                    missing_files.append("json")
+                result_info["error"] = f"输出文件不存在: {', '.join(missing_files)}"
+                result_info["success"] = False
+                logger.error(f"❌ 处理失败: {image_name} - {result_info['error']}")
+            
+        except Exception as e:
+            result_info["error"] = str(e)
+            result_info["success"] = False
+            logger.error(f"Error processing {image_name}: {e}")
+            if self.debug:
+                traceback.print_exc()
+        
+        finally:
+            result_info["processing_time"] = time.time() - start_time
+        
+        return result_info
+