Browse source code

feat: add installation guides for extension modules and model sources

myhloli 4 months ago
parent
commit
406b8ea95d

The file diff has been suppressed because it is too large
+ 4 - 39
README_zh-CN.md


+ 66 - 4
docker/compose.yaml

@@ -1,10 +1,9 @@
-# Documentation:
-# https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands
 services:
-  mineru-sglang:
+  mineru-sglang-server:
     image: mineru-sglang:latest
-    container_name: mineru-sglang
+    container_name: mineru-sglang-server
     restart: always
+    profiles: ["sglang-server"]
     ports:
       - 30000:30000
     environment:
@@ -30,3 +29,66 @@ services:
             - driver: nvidia
               device_ids: ["0"]
               capabilities: [gpu]
+
+  mineru-api:
+    image: mineru-sglang:latest
+    container_name: mineru-api
+    restart: always
+    profiles: ["api"]
+    ports:
+      - 8000:8000
+    environment:
+      MINERU_MODEL_SOURCE: local
+    entrypoint: mineru-api
+    command:
+      --host 0.0.0.0
+      --port 8000
+      # parameters for sglang-engine
+      # --enable-torch-compile  # You can also enable torch.compile to accelerate inference speed by approximately 15%
      # --dp-size 2  # If using multiple GPUs, increase throughput with sglang's data parallel (DP) mode
      # --tp-size 2  # If you have more than one GPU, you can expand available VRAM using tensor parallelism (TP) mode.
      # --mem-fraction-static 0.5  # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size with this parameter; if VRAM issues persist, try lowering it further to `0.4` or below.
+    ulimits:
+      memlock: -1
+      stack: 67108864
+    ipc: host
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              device_ids: [ "0" ]
+              capabilities: [ gpu ]
+
+  mineru-gradio:
+    image: mineru-sglang:latest
+    container_name: mineru-gradio
+    restart: always
+    profiles: ["gradio"]
+    ports:
+      - 7860:7860
+    environment:
+      MINERU_MODEL_SOURCE: local
+    entrypoint: mineru-gradio
+    command:
+      --server-name 0.0.0.0
+      --server-port 7860
+      --enable-sglang-engine true  # Enable the sglang engine for Gradio
+      # --enable-api false  # If you want to disable the API, set this to false
+      # --max-convert-pages 20  # If you want to limit the number of pages for conversion, set this to a specific number
+      # parameters for sglang-engine
+      # --enable-torch-compile  # You can also enable torch.compile to accelerate inference speed by approximately 15%
      # --dp-size 2  # If using multiple GPUs, increase throughput with sglang's data parallel (DP) mode
      # --tp-size 2  # If you have more than one GPU, you can expand available VRAM using tensor parallelism (TP) mode.
      # --mem-fraction-static 0.5  # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size with this parameter; if VRAM issues persist, try lowering it further to `0.4` or below.
+    ulimits:
+      memlock: -1
+      stack: 67108864
+    ipc: host
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              device_ids: [ "0" ]
+              capabilities: [ gpu ]

The file diff has been suppressed because it is too large
+ 2 - 2
docs/en/index.md


+ 68 - 0
docs/en/quick_start/docker_deployment.md

@@ -0,0 +1,68 @@
+# Deploying MinerU with Docker
+
+MinerU provides a convenient Docker deployment method that helps you quickly set up the environment and avoid some tricky environment compatibility issues.
+
+## Build the Docker Image Using the Dockerfile
+
+```bash
+wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/china/Dockerfile
+docker build -t mineru-sglang:latest -f Dockerfile .
+```
+
+> [!TIP]
+> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `lmsysorg/sglang:v0.4.8.post1-cu126` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper platforms.
+> If you are using the newer `Blackwell` platform, please modify the base image to `lmsysorg/sglang:v0.4.8.post1-cu128-b200` before executing the build operation.
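+
+For example, a minimal sketch of switching the base image for `Blackwell` before building, using the two image tags named in the tip above:
+
+```bash
+# Rewrite the base image reference in the downloaded Dockerfile, then build as usual
+sed -i 's|lmsysorg/sglang:v0.4.8.post1-cu126|lmsysorg/sglang:v0.4.8.post1-cu128-b200|' Dockerfile
+docker build -t mineru-sglang:latest -f Dockerfile .
+```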
+
+## Docker Description
+
+MinerU's Docker image uses `lmsysorg/sglang` as its base image, so it includes the `sglang` inference acceleration framework and the necessary dependencies by default. On compatible devices, you can therefore use `sglang` to accelerate VLM model inference directly.
+
+> [!NOTE]
+> Requirements for using `sglang` to accelerate VLM model inference:
+> - The device must have a Turing-architecture (or newer) graphics card with 8GB+ of available VRAM.
+> - The host machine's graphics driver must support CUDA 12.6 or higher; the `Blackwell` platform requires CUDA 12.8 or higher. You can check the driver version with the `nvidia-smi` command.
+> - Docker container must have access to the host machine's graphics devices.
+>
+> If your device doesn't meet the above requirements, you can still use MinerU's other features, but you cannot use `sglang` to accelerate VLM model inference, i.e. you cannot use the `vlm-sglang-engine` backend or start the `vlm-sglang-server` service.
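+
+As a quick sanity check of the requirements above, you can verify the driver version on the host and GPU visibility from a container before starting any service (a sketch; `nvidia-smi` is provided by the NVIDIA driver and container toolkit):
+
+```bash
+# On the host: the driver must report CUDA 12.6+ (12.8+ for Blackwell)
+nvidia-smi
+# From a throwaway container: confirms Docker can access the host GPUs
+docker run --rm --gpus all mineru-sglang:latest nvidia-smi
+```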
+
+## Start the Docker Container
+
+```bash
+docker run --gpus all \
+  --shm-size 32g \
+  -p 30000:30000 -p 7860:7860 -p 8000:8000 \
+  --ipc=host \
+  -it mineru-sglang:latest \
+  /bin/bash
+```
+
+After executing this command, you will enter the Docker container's interactive terminal, with several ports mapped for services you may want to run. You can run MinerU commands directly inside the container to use its features.
+You can also start MinerU services directly by replacing `/bin/bash` with a service startup command; a sketch follows. For detailed instructions, please refer to the [MinerU Usage Documentation](../usage/index.md).
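+
+For example, to launch the `sglang-server` service directly instead of an interactive shell (the same pattern works for `mineru-api` and `mineru-gradio` with their respective ports):
+
+```bash
+docker run --gpus all \
+  --shm-size 32g \
+  -p 30000:30000 \
+  --ipc=host \
+  mineru-sglang:latest \
+  mineru-sglang-server --host 0.0.0.0 --port 30000
+```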
+
+## Start Services Directly with Docker Compose
+
+We provide a `compose.yaml` file that you can use to quickly start MinerU services.
+
+```bash
+# Download compose.yaml file
+wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
+```
+
+- Start the `sglang-server` service and connect to it via the `vlm-sglang-client` backend:
+  ```bash
+  docker compose -f compose.yaml --profile sglang-server up -d
+  # In another terminal, connect to sglang server via sglang client (only requires CPU and network, no sglang environment needed)
+  mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://<server_ip>:30000
+  ```
+
+- Start the API service:
+  ```bash
+  docker compose -f compose.yaml --profile api up -d
+  ```
+  Access `http://<server_ip>:8000/docs` in your browser to view the API documentation.
+
+- Start the Gradio WebUI service:
+  ```bash
+  docker compose -f compose.yaml --profile gradio up -d
+  ```
+  Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI, or `http://<server_ip>:7860/?view=api` to use the Gradio API (a status-check sketch follows this list).
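+
+After bringing a profile up, a quick way to confirm the service is healthy, using standard Docker Compose commands (the `gradio` profile is used as the example):
+
+```bash
+docker compose -f compose.yaml --profile gradio ps        # confirm the container is running
+docker compose -f compose.yaml --profile gradio logs -f   # follow the service logs
+```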

+ 37 - 0
docs/en/quick_start/extension_modules.md

@@ -0,0 +1,37 @@
+# MinerU Extension Modules Installation Guide
+MinerU supports installing extension modules on demand to enhance functionality or to support specific model backends.
+
+## Common Scenarios
+
+### Core Functionality Installation
+The `core` module is the core dependency of MinerU, containing all functional modules except `sglang`. Installing this module ensures the basic functionality of MinerU works properly.
+```bash
+uv pip install mineru[core]
+```
+
+---
+
+### Using `sglang` to Accelerate VLM Model Inference
+The `sglang` module provides acceleration support for VLM model inference, suitable for Turing-architecture (or newer) graphics cards with 8GB+ VRAM. Installing it can significantly improve model inference speed.
+In the package configuration, the `all` extra includes both the `core` and `sglang` modules, so `mineru[all]` and `mineru[core,sglang]` are equivalent.
+```bash
+uv pip install mineru[all]
+```
+> [!TIP]
+> If an exception occurs while installing the complete package including sglang, refer to the [sglang official documentation](https://docs.sglang.ai/start/install.html) to try to resolve it, or use the [Docker](./docker_deployment.md) deployment method directly.
+
+---
+
+### Installing the Lightweight Client to Connect to sglang-server
+If you need a lightweight client on an edge device to connect to `sglang-server`, you can install the basic mineru package; it is very light and suits devices with only a CPU and a network connection. The example after the install command shows a typical client invocation.
+```bash
+uv pip install mineru
+```
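+
+Once installed, the client only needs the address of a running `sglang-server` (started elsewhere, e.g. via Docker):
+
+```bash
+mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://<server_ip>:30000
+```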
+
+---
+
+### Using Pipeline Backend on Outdated Linux Systems
+If your system is too old to meet the dependency requirements of `mineru[core]`, this option minimally satisfies MinerU's runtime requirements; it suits legacy systems that cannot be upgraded and only need the pipeline backend.
+```bash
+uv pip install mineru[pipeline_old_linux]
+```

The file diff has been suppressed because it is too large
+ 3 - 2
docs/en/quick_start/index.md


+ 56 - 0
docs/en/usage/advanced_cli_parameters.md

@@ -0,0 +1,56 @@
+# Advanced Command Line Parameters
+
+## SGLang Acceleration Parameter Optimization
+
+### Memory Optimization Parameters
+> [!TIP]
+> SGLang acceleration mode currently runs on Turing-architecture graphics cards with as little as 8GB VRAM, but cards with less than 24GB VRAM may run out of memory. You can optimize memory usage with the following parameters:
+> - If you encounter insufficient VRAM on a single graphics card, you may need to reduce the KV cache size with `--mem-fraction-static 0.5`. If VRAM issues persist, try reducing it further to `0.4` or lower.
+> - If you have two or more graphics cards, you can try tensor parallelism (TP) mode to expand available VRAM: `--tp-size 2`
+
+### Performance Optimization Parameters
+> [!TIP]
+> If you can already use SGLang for accelerated VLM model inference but want to push inference speed further, you can try the following parameters:
+> - If you have multiple graphics cards, you can use SGLang's data parallel (DP) mode to increase throughput: `--dp-size 2`
+> - You can also enable `torch.compile` to accelerate inference speed by approximately 15%: `--enable-torch-compile`
+
+### Parameter Passing Instructions
+> [!TIP]
+> - If you want to learn more about `sglang` parameter usage, please refer to the [SGLang official documentation](https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands)
+> - All officially supported SGLang parameters can be passed to MinerU through command line arguments, including for the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`; a short sketch follows this list
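+
+For example, two invocations that pass SGLang arguments straight through the MinerU entry points, combining the flags from the tips above:
+
+```bash
+# Reduce the KV cache when running the engine backend on a single small GPU
+mineru -p <input_path> -o <output_path> -b vlm-sglang-engine --mem-fraction-static 0.5
+# Serve with tensor parallelism across two GPUs, plus torch.compile
+mineru-sglang-server --port 30000 --tp-size 2 --enable-torch-compile
+```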
+
+## GPU Device Selection and Configuration
+
+### CUDA_VISIBLE_DEVICES Basic Usage
+> [!TIP]
+> - In any situation, you can specify visible GPU devices by adding the `CUDA_VISIBLE_DEVICES` environment variable at the beginning of the command line. For example:
+>   ```bash
+>   CUDA_VISIBLE_DEVICES=1 mineru -p <input_path> -o <output_path>
+>   ```
+> - This specification method is effective for all command line calls, including `mineru`, `mineru-sglang-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.
+
+### Common Device Configuration Examples
+> [!TIP]
+> - Here are some common `CUDA_VISIBLE_DEVICES` setting examples:
+>   ```bash
+>   CUDA_VISIBLE_DEVICES=1        # Only device 1 will be seen
+>   CUDA_VISIBLE_DEVICES=0,1      # Devices 0 and 1 will be visible
+>   CUDA_VISIBLE_DEVICES="0,1"    # Same as above; quotation marks are optional
+>   CUDA_VISIBLE_DEVICES=0,2,3    # Devices 0, 2, 3 will be visible; device 1 is masked
+>   CUDA_VISIBLE_DEVICES=""      # No GPU will be visible
+>   ```
+
+### Practical Application Scenarios
+> [!TIP]
+> Here are some possible usage scenarios:
+> - If you have multiple graphics cards and need to use cards 0 and 1 with multi-GPU parallelism to start `sglang-server`, you can use the following command:
+>   ```bash
+>   CUDA_VISIBLE_DEVICES=0,1 mineru-sglang-server --port 30000 --dp-size 2
+>   ```
+> - If you have multiple graphics cards and need to start two `fastapi` services on cards 0 and 1, each listening on a different port, you can use the following commands:
+>   ```bash
+>   # In terminal 1
+>   CUDA_VISIBLE_DEVICES=0 mineru-api --host 127.0.0.1 --port 8000
+>   # In terminal 2
+>   CUDA_VISIBLE_DEVICES=1 mineru-api --host 127.0.0.1 --port 8001
+>   ```

+ 71 - 0
docs/en/usage/cli_tools.md

@@ -0,0 +1,71 @@
+# Command Line Tools Usage Instructions
+
+## View Help Information
+To view help information for the MinerU command line tools, use the `--help` parameter. Here are help examples for each command line tool:
+```bash
+mineru --help
+Usage: mineru [OPTIONS]
+
+Options:
+  -v, --version                   Show version and exit
+  -p, --path PATH                 Input file path or directory (required)
+  -o, --output PATH               Output directory (required)
+  -m, --method [auto|txt|ocr]     Parsing method: auto (default), txt, ocr (pipeline backend only)
+  -b, --backend [pipeline|vlm-transformers|vlm-sglang-engine|vlm-sglang-client]
+                                  Parsing backend (default: pipeline)
+  -l, --lang [ch|ch_server|ch_lite|en|korean|japan|chinese_cht|ta|te|ka|latin|arabic|east_slavic|cyrillic|devanagari]
+                                  Specify document language (improves OCR accuracy, pipeline backend only)
+  -u, --url TEXT                  Service address when using sglang-client
+  -s, --start INTEGER             Starting page number for parsing (0-based)
+  -e, --end INTEGER               Ending page number for parsing (0-based)
+  -f, --formula BOOLEAN           Enable formula parsing (default: enabled)
+  -t, --table BOOLEAN             Enable table parsing (default: enabled)
+  -d, --device TEXT               Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline backend only)
+  --vram INTEGER                  Maximum GPU VRAM usage per process (GB) (pipeline backend only)
+  --source [huggingface|modelscope|local]
+                                  Model source, default: huggingface
+  --help                          Show help information
+```
+```bash
+mineru-api --help
+Usage: mineru-api [OPTIONS]
+
+Options:
+  --host TEXT     Server host (default: 127.0.0.1)
+  --port INTEGER  Server port (default: 8000)
+  --reload        Enable auto-reload (development mode)
+  --help          Show this message and exit.
+```
+```bash
+mineru-gradio --help
+Usage: mineru-gradio [OPTIONS]
+
+Options:
+  --enable-example BOOLEAN        Enable example files for input. The example
+                                  files to be input need to be placed in the
+                                  `example` folder within the directory where
+                                  the command is currently executed.
+  --enable-sglang-engine BOOLEAN  Enable SgLang engine backend for faster
+                                  processing.
+  --enable-api BOOLEAN            Enable gradio API for serving the
+                                  application.
+  --max-convert-pages INTEGER     Set the maximum number of pages to convert
+                                  from PDF to Markdown.
+  --server-name TEXT              Set the server name for the Gradio app.
+  --server-port INTEGER           Set the server port for the Gradio app.
+  --latex-delimiters-type [a|b|all]
+                                  Set the type of LaTeX delimiters to use in
+                                  Markdown rendering: 'a' for type '$', 'b' for
+                                  type '()[]', 'all' for both types.
+  --help                          Show this message and exit.
+```
+
+## Environment Variables Description
+
+Some parameters of the MinerU command line tools have equivalent environment variable configurations. Environment variable settings generally take priority over command line parameters and take effect across all command line tools; a usage example follows the list below.
+- `MINERU_DEVICE_MODE`: Specifies the inference device; supports device types like `cpu/cuda/cuda:0/npu/mps`; only effective for the `pipeline` backend.
+- `MINERU_VIRTUAL_VRAM_SIZE`: Specifies the maximum GPU VRAM usage per process (GB); only effective for the `pipeline` backend.
+- `MINERU_MODEL_SOURCE`: Specifies the model source; supports `huggingface/modelscope/local`; defaults to `huggingface` and can be switched to `modelscope` or local models.
+- `MINERU_TOOLS_CONFIG_JSON`: Specifies the configuration file path; defaults to `mineru.json` in the user directory.
+- `MINERU_FORMULA_ENABLE`: Enables formula parsing; defaults to `true`; set to `false` to disable formula parsing.
+- `MINERU_TABLE_ENABLE`: Enables table parsing; defaults to `true`; set to `false` to disable table parsing.
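+
+For example, one-off overrides for a single run, using the variables listed above:
+
+```bash
+# Pin the pipeline backend to GPU 0 with an 8 GB VRAM budget
+MINERU_DEVICE_MODE=cuda:0 MINERU_VIRTUAL_VRAM_SIZE=8 mineru -p <input_path> -o <output_path>
+# Disable formula and table parsing for a faster text-only pass
+MINERU_FORMULA_ENABLE=false MINERU_TABLE_ENABLE=false mineru -p <input_path> -o <output_path>
+```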

+ 51 - 104
docs/en/usage/index.md

@@ -1,125 +1,72 @@
 # Using MinerU
 
-## Command Line Usage
-
-### Basic Usage
-
-The simplest command line invocation is:
-
-```bash
-mineru -p <input_path> -o <output_path>
-```
-
-- `<input_path>`: Local PDF/Image file or directory (supports pdf/png/jpg/jpeg/webp/gif)
-- `<output_path>`: Output directory
-
-### View Help Information
-
-Get all available parameter descriptions:
-
+## Quick Model Source Configuration
+MinerU uses `huggingface` as the default model source. If your network cannot access `huggingface`, you can conveniently switch the model source to `modelscope` via an environment variable:
 ```bash
-mineru --help
-```
-
-### Parameter Details
-
-```text
-Usage: mineru [OPTIONS]
-
-Options:
-  -v, --version                   Show version and exit
-  -p, --path PATH                 Input file path or directory (required)
-  -o, --output PATH              Output directory (required)
-  -m, --method [auto|txt|ocr]     Parsing method: auto (default), txt, ocr (pipeline backend only)
-  -b, --backend [pipeline|vlm-transformers|vlm-sglang-engine|vlm-sglang-client]
-                                  Parsing backend (default: pipeline)
-  -l, --lang [ch|ch_server|ch_lite|en|korean|japan|chinese_cht|ta|te|ka|latin|arabic|east_slavic|cyrillic|devanagari]
-                                  Specify document language (improves OCR accuracy, pipeline backend only)
-  -u, --url TEXT                  Service address when using sglang-client
-  -s, --start INTEGER             Starting page number (0-based)
-  -e, --end INTEGER               Ending page number (0-based)
-  -f, --formula BOOLEAN           Enable formula parsing (default: on)
-  -t, --table BOOLEAN             Enable table parsing (default: on)
-  -d, --device TEXT               Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline backend only)
-  --vram INTEGER                  Maximum GPU VRAM usage per process (GB)(pipeline backend only)
-  --source [huggingface|modelscope|local]
-                                  Model source, default: huggingface
-  --help                          Show help information
+export MINERU_MODEL_SOURCE=modelscope
 ```
+For more information about model source configuration and custom local model paths, please refer to the [Model Source Documentation](./model_source.md).
 
 ---
 
-## Model Source Configuration
-
-MinerU automatically downloads required models from HuggingFace on first run. If HuggingFace is inaccessible, you can switch model sources:
-
-### Switch to ModelScope Source
-
-```bash
-mineru -p <input_path> -o <output_path> --source modelscope
-```
-
-Or set environment variable:
-
+## Quick Usage via the Command Line
+MinerU ships with command line tools that let you quickly parse PDFs from the command line:
 ```bash
-export MINERU_MODEL_SOURCE=modelscope
+# Default parsing using pipeline backend
 mineru -p <input_path> -o <output_path>
 ```
+- `<input_path>`: Local PDF/image file or directory
+- `<output_path>`: Output directory
 
-### Using Local Models
-
-#### 1. Download Models Locally
-
-```bash
-mineru-models-download --help
-```
-
-Or use interactive command-line tool to select models:
-
-```bash
-mineru-models-download
-```
-
-After download, model paths will be displayed in current terminal and automatically written to `mineru.json` in user directory.
+> [!NOTE]
+> The command line tool automatically attempts cuda/mps acceleration on Linux and macOS systems. Windows users who need cuda acceleration should visit the [PyTorch official website](https://pytorch.org/get-started/locally/) and select the command matching their cuda version to install acceleration-enabled `torch` and `torchvision`.
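+>
+> A hypothetical example for CUDA 12.6 (take the exact command for your CUDA version from the PyTorch selector; the index URL below is an assumption based on PyTorch's wheel-index convention):
+>
+> ```bash
+> pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
+> ```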
 
-#### 2. Parse Using Local Models
+> [!TIP]
+> For more information about output files, please refer to [Output File Documentation](./output_file.md).
 
 ```bash
-mineru -p <input_path> -o <output_path> --source local
+# Or specify vlm backend for parsing
+mineru -p <input_path> -o <output_path> -b vlm-transformers
 ```
+> [!TIP]
+> The vlm backend also supports `sglang` acceleration. Compared to the `transformers` backend, `sglang` can deliver a 20-30x speedup. See the [Extension Modules Installation Guide](../quick_start/extension_modules.md) for how to install the complete package with `sglang` support.
 
-Or enable via environment variable:
-
-```bash
-export MINERU_MODEL_SOURCE=local
-mineru -p <input_path> -o <output_path>
-```
+If you need to adjust parsing options with custom parameters, see the more detailed [Command Line Tools Usage Instructions](./cli_tools.md).
 
 ---
 
-## Using sglang to Accelerate VLM Model Inference
-
-### Through the sglang-engine Mode
-
-```bash
-mineru -p <input_path> -o <output_path> -b vlm-sglang-engine
-```
-
-### Through the sglang-server/client Mode
-
-1. Start Server:
-
-```bash
-mineru-sglang-server --port 30000
-```
-
-2. Use Client in another terminal:
-
-```bash
-mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://127.0.0.1:30000
-```
-
+## Advanced Usage via API, WebUI, sglang-client/server
+
+- Direct Python API calls: [Python Usage Example](https://github.com/opendatalab/MinerU/blob/master/demo/demo.py)
+- FastAPI calls:
+  ```bash
+  mineru-api --host 127.0.0.1 --port 8000
+  ```
+  Access http://127.0.0.1:8000/docs in your browser to view the API documentation.
+- Start Gradio WebUI visual frontend:
+  ```bash
+  # Using pipeline/vlm-transformers/vlm-sglang-client backends
+  mineru-gradio --server-name 127.0.0.1 --server-port 7860
+  # Or using vlm-sglang-engine/pipeline backends (requires sglang environment)
+  mineru-gradio --server-name 127.0.0.1 --server-port 7860 --enable-sglang-engine true
+  ```
+  Access http://127.0.0.1:7860 in your browser to use Gradio WebUI or access http://127.0.0.1:7860/?view=api to use the Gradio API.
+- Using the `sglang-client/server` method:
+  ```bash
+  # Start sglang server (requires sglang environment)
+  mineru-sglang-server --port 30000
+  # In another terminal, connect to sglang server via sglang client (only requires CPU and network, no sglang environment needed)
+  mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://127.0.0.1:30000
+  ``` 
 > [!TIP]
-> For more information about output files, please refer to [Output File Documentation](../output_file.md)
+> All officially supported sglang parameters can be passed to MinerU through command line arguments, including for the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`.
+> We have compiled some commonly used `sglang` parameters and usage methods in [Advanced Command Line Parameters](./advanced_cli_parameters.md).
+
+## Extending MinerU Functionality with Configuration Files
 
----
+- MinerU is ready to use out of the box, but it also supports extending functionality through configuration files. You can create a `mineru.json` file in your user directory to add custom configurations.
+- The `mineru.json` file is generated automatically when you use the built-in model download command `mineru-models-download`, or you can create it by copying the [configuration template file](https://github.com/opendatalab/MinerU/blob/master/mineru.template.json) to your user directory and renaming it to `mineru.json`.
+- Here are some available configuration options (a sample file follows this list):
+  - `latex-delimiter-config`: Configures LaTeX formula delimiters; defaults to the `$` symbol and can be changed to other symbols or strings as needed.
+  - `llm-aided-config`: Configures parameters for LLM-assisted title hierarchy; compatible with all LLM models that support the OpenAI protocol; defaults to Alibaba Cloud Bailian's `qwen2.5-32b-instruct` model. You need to configure your own API key and set `enable` to `true` to enable this feature.
+  - `models-dir`: Specifies the local model storage directory; specify separate model directories for the `pipeline` and `vlm` backends. After specifying the directories, you can use local models by setting `export MINERU_MODEL_SOURCE=local`.
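+
+A minimal `mineru.json` sketch, following the configuration template file referenced above (the values mirror the template defaults; writing to `~` assumes your user directory):
+
+```bash
+cat > ~/mineru.json <<'EOF'
+{
+    "latex-delimiter-config": {
+        "display": {"left": "$$", "right": "$$"},
+        "inline": {"left": "$", "right": "$"}
+    },
+    "llm-aided-config": {
+        "title_aided": {
+            "api_key": "your_api_key",
+            "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
+            "model": "qwen2.5-32b-instruct",
+            "enable": false
+        }
+    },
+    "models-dir": {
+        "pipeline": "",
+        "vlm": ""
+    },
+    "config_version": "1.3.0"
+}
+EOF
+```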

+ 54 - 0
docs/en/usage/model_source.md

@@ -0,0 +1,54 @@
+# Model Source Documentation
+
+MinerU uses `HuggingFace` and `ModelScope` as model repositories. Users can switch model sources or use local models as needed.
+
+- `HuggingFace` is the default model source, providing excellent loading speed and high stability globally.
+- `ModelScope` is the best choice for users in mainland China; it provides SDK modules seamlessly compatible with `hf` and suits users who cannot access HuggingFace.
+
+## Methods to Switch Model Sources
+
+### Switch via Command Line Parameters
+Currently, only the `mineru` command line tool supports switching model sources through command line parameters. Other command line tools such as `mineru-api`, `mineru-gradio`, etc., do not support this yet.
+```bash
+mineru -p <input_path> -o <output_path> --source modelscope
+```
+
+### Switch via Environment Variables
+You can switch the model source in any situation by setting an environment variable; this applies to all command line tools and API calls.
+```bash
+export MINERU_MODEL_SOURCE=modelscope
+```
+or
+```python
+import os
+os.environ["MINERU_MODEL_SOURCE"] = "modelscope"
+```
+> [!TIP]
+> A model source set through an environment variable takes effect in the current terminal session until the terminal is closed or the variable is modified. It has higher priority than the command line parameter: if both are set, the command line parameter is ignored.
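+
+For example, standard shell environment-prefix syntax scopes the model source to a single invocation instead of the whole session:
+
+```bash
+MINERU_MODEL_SOURCE=modelscope mineru -p <input_path> -o <output_path>
+```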
+
+## Using Local Models
+
+### 1. Download Models to Local Storage
+```bash
+mineru-models-download --help
+```
+or use the interactive command line tool to select model downloads:
+```bash
+mineru-models-download
+```
+> [!TIP]
+> - After the download completes, the model path is printed in the current terminal window and automatically written to `mineru.json` in your user directory.
+> - After downloading models locally, you can freely move the model folder to another location, as long as you update the model path in `mineru.json`.
+> - If you deploy the model folder to another server, make sure to move the `mineru.json` file to the new device's user directory and configure the model path correctly.
+> - If you need to update model files, run the `mineru-models-download` command again. Model updates do not currently support custom paths: if you haven't moved the local model folder, model files are updated incrementally; if you have moved it, model files are re-downloaded to the default location and `mineru.json` is updated.
+
+### 2. Use Local Models for Parsing
+
+```bash
+mineru -p <input_path> -o <output_path> --source local
+```
+or enable through environment variables:
+```bash
+export MINERU_MODEL_SOURCE=local
+mineru -p <input_path> -o <output_path>
+```

+ 0 - 29
docs/mineru.template.json

@@ -1,29 +0,0 @@
-{
-    "bucket_info":{
-        "bucket-name-1":["ak", "sk", "endpoint"],
-        "bucket-name-2":["ak", "sk", "endpoint"]
-    },
-    "latex-delimiter-config": {
-        "display": {
-            "left": "$$",
-            "right": "$$"
-        },
-        "inline": {
-            "left": "$",
-            "right": "$"
-        }
-    },
-    "llm-aided-config": {
-        "title_aided": {
-            "api_key": "your_api_key",
-            "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
-            "model": "qwen2.5-32b-instruct",
-            "enable": false
-        }
-    },
-    "models-dir": {
-        "pipeline": "",
-        "vlm": ""
-    },
-    "config_version": "1.3.0"
-}

+ 20 - 2
docs/zh/FAQ/index.md

@@ -1,6 +1,10 @@
 # Frequently Asked Questions
 
-## 1. Encountering the error `ImportError: libGL.so.1: cannot open shared object file: No such file or directory` in Ubuntu 22.04 on WSL2
+If your question is not listed here, you can also consult the AI assistant via [DeepWiki](https://deepwiki.com/opendatalab/MinerU), which can resolve most common questions.
+
+If you still cannot solve the problem, you can join the community via [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](http://mineru.space/s/V85Yl) to exchange with other users and developers.
+
+### 1. Encountering the error `ImportError: libGL.so.1: cannot open shared object file: No such file or directory` in Ubuntu 22.04 on WSL2
 
 The `libgl` library is missing from Ubuntu 22.04 on WSL2; install it with the following command:
 
@@ -11,7 +15,7 @@ sudo apt-get install libgl1-mesa-glx
 Reference: https://github.com/opendatalab/MinerU/issues/388
 
 
-## 2. The error `ERROR: Failed building wheel for simsimd` when installing MinerU on CentOS 7 or Ubuntu 18
+### 2. The error `ERROR: Failed building wheel for simsimd` when installing MinerU on CentOS 7 or Ubuntu 18
 
 The new version of albumentations (1.4.21) introduces a dependency on simsimd. Because the pre-built simsimd packages for Linux require glibc >= 2.28, some Linux distributions released before 2019 cannot install it normally. You can install with the following command instead:
 ```
@@ -21,3 +25,17 @@ pip install -U "mineru[pipeline_old_linux]"
 ```
 
 Reference: https://github.com/opendatalab/MinerU/issues/1004
+
+### 3. Missing text in parsing results when installing and using MinerU on Linux
+
+In versions >= 2.0, MinerU uses `pypdfium2` instead of `pymupdf` as the PDF page rendering engine, to resolve the AGPLv3 license issue. On some Linux distributions, missing CJK fonts can cause some text to be lost when rendering PDF pages to images.
+To solve this problem, you can install the Noto font packages with the following commands, which work on Ubuntu/Debian systems:
+```bash
+sudo apt update
+sudo apt install fonts-noto-core
+sudo apt install fonts-noto-cjk
+fc-cache -fv
+```
+You can also use our [Docker deployment](../quick_start/docker_deployment.md) method to build the image; it includes the above font packages by default.
+
+Reference: https://github.com/opendatalab/MinerU/issues/2915

+ 9 - 6
docs/zh/index.md

@@ -3,7 +3,7 @@
 <p align="center">
   <img src="../images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
 </p>
-</div>
+
 <!-- icon -->
 
 [![stars](https://img.shields.io/github/stars/opendatalab/MinerU.svg)](https://github.com/opendatalab/MinerU)
@@ -21,15 +21,12 @@
 [![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
 [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
 
-<div align="center" xmlns="http://www.w3.org/1999/html">
+
 <a href="https://trendshift.io/repositories/11174" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11174" alt="opendatalab%2FMinerU | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
 
 <!-- hot link -->
 
 <p align="center">
-<a href="https://github.com/opendatalab/PDF-Extract-Kit">PDF-Extract-Kit: 高质量PDF解析工具箱</a>🔥🔥🔥
-<br>
-<br>
 🚀<a href="https://mineru.net/?source=github">MinerU official site → ✅ No-install online version ✅ Full-featured client ✅ Online developer API calls; skip the deployment hassle and get multiple product forms in one click!</a>
 </p>
 
@@ -62,4 +59,10 @@ MinerU was born during the pre-training of [InternLM](https://github.com/InternLM/InternLM)
 - Supports multiple output formats, such as multimodal- and NLP-oriented Markdown, reading-order JSON, and information-rich intermediate formats
 - Supports multiple visualization results, including layout and span visualization, for efficient output verification and quality inspection
 - Runs in pure CPU environments, with GPU (CUDA)/NPU (CANN)/MPS acceleration supported
-- Compatible with Windows, Linux, and Mac platforms
+- Compatible with Windows, Linux, and Mac platforms
+
+
+## Usage Guide
+
+- [Quick Start Guide](./quick_start/index.md)
+- [Detailed Usage Instructions](./usage/index.md)

+ 0 - 10
docs/zh/known_issues.md

@@ -1,10 +0,0 @@
-# Known Issues
-
-- Reading order is based on the model's spatial sorting of readable content; under extremely complex layouts, some regions may be out of order
-- Support for vertical text is limited
-- Tables of contents and lists are recognized by rules; a few uncommon list forms may not be recognized
-- Code blocks are not yet supported by the layout model
-- Comic books, art albums, primary school textbooks, and exercise books cannot yet be parsed well
-- Table recognition may produce row/column errors on complex tables
-- OCR on PDFs in low-resource languages may produce inaccurate characters (e.g., diacritics in Latin script, easily confused Arabic characters)
-- Some formulas may not render in Markdown

+ 64 - 0
docs/zh/quick_start/docker_deployment.md

@@ -0,0 +1,64 @@
+# Deploying MinerU with Docker
+
+MinerU provides a convenient Docker deployment method that helps you quickly set up the environment and avoid some tricky environment compatibility issues.
+
+## Build the Docker Image Using the Dockerfile
+
+```bash
+wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/china/Dockerfile
+docker build -t mineru-sglang:latest -f Dockerfile .
+```
+
+> [!TIP]
+> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `lmsysorg/sglang:v0.4.8.post1-cu126` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper platforms.
+> If you are using the newer `Blackwell` platform, change the base image to `lmsysorg/sglang:v0.4.8.post1-cu128-b200` before running the build.
+
+## Docker Description
+
+MinerU's Docker image uses `lmsysorg/sglang` as its base image, so it includes the `sglang` inference acceleration framework and the necessary dependencies by default. On compatible devices, you can therefore use `sglang` to accelerate VLM model inference directly.
+> [!NOTE]
+> Requirements for using `sglang` to accelerate VLM model inference:
+> - The device must have a Turing-architecture (or newer) graphics card with 8GB+ of available VRAM.
+> - The host machine's graphics driver must support CUDA 12.6 or higher; the `Blackwell` platform requires CUDA 12.8 or higher. You can check the driver version with the `nvidia-smi` command.
+> - The Docker container must have access to the host machine's GPU devices.
+>
+> If your device doesn't meet the above requirements, you can still use MinerU's other features, but you cannot use `sglang` to accelerate VLM model inference, i.e. you cannot use the `vlm-sglang-engine` backend or start the `vlm-sglang-server` service.
+
+## Start the Docker Container
+
+```bash
+docker run --gpus all \
+  --shm-size 32g \
+  -p 30000:30000 -p 7860:7860 -p 8000:8000 \
+  --ipc=host \
+  -it mineru-sglang:latest \
+  /bin/bash
+```
+
+After executing this command, you will enter the Docker container's interactive terminal, with several ports mapped for services you may want to run. You can run MinerU commands directly inside the container to use its features.
+You can also start MinerU services directly by replacing `/bin/bash` with a service startup command. For detailed instructions, please refer to the [MinerU Usage Documentation](../usage/index.md).
+
+## Start Services Directly with Docker Compose
+
+We provide a `compose.yaml` file that you can use to quickly start MinerU services.
+
+```bash
+# Download the compose.yaml file
+wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
+```
+- Start the `sglang-server` service and connect to it via the `vlm-sglang-client` backend:
+  ```bash
+  docker compose -f compose.yaml --profile sglang-server up -d
+  # In another terminal, connect to sglang server via sglang client (only requires CPU and network, no sglang environment needed)
+  mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://<server_ip>:30000
+  ```
+- Start the API service:
+  ```bash
+  docker compose -f compose.yaml --profile api up -d
+  ```
+  Visit `http://<server_ip>:8000/docs` in your browser to view the API documentation.
+- Start the Gradio WebUI service:
+  ```bash
+  docker compose -f compose.yaml --profile gradio up -d
+  ```
+  Visit `http://<server_ip>:7860` in your browser to use the Gradio WebUI, or `http://<server_ip>:7860/?view=api` to use the Gradio API.

+ 37 - 0
docs/zh/quick_start/extension_modules.md

@@ -0,0 +1,37 @@
+# MinerU Extension Modules Installation Guide
+MinerU supports installing extension modules on demand to enhance functionality or to support specific model backends.
+
+## Common Scenarios
+
+### Core Functionality Installation
+The `core` module is MinerU's core dependency, containing all functional modules except `sglang`. Installing it ensures that MinerU's basic functionality works properly.
+```bash
+uv pip install mineru[core]
+```
+
+---
+
+### Using `sglang` to Accelerate VLM Model Inference
+The `sglang` module provides acceleration support for VLM model inference, suitable for Turing-architecture (or newer) graphics cards with 8GB+ VRAM. Installing it can significantly improve model inference speed.
+In the package configuration, the `all` extra includes both the `core` and `sglang` modules, so `mineru[all]` and `mineru[core,sglang]` are equivalent.
+```bash
+uv pip install mineru[all]
+```
+> [!TIP]
+> If an exception occurs while installing the complete package including sglang, refer to the [sglang official documentation](https://docs.sglang.ai/start/install.html) to try to resolve it, or deploy with [Docker](./docker_deployment.md) directly.
+
+---
+
+### Installing the Lightweight Client to Connect to sglang-server
+If you need a lightweight client on an edge device to connect to `sglang-server`, you can install the basic mineru package; it is very light and suits devices with only a CPU and a network connection.
+```bash
+uv pip install mineru
+```
+
+---
+
+### Using the Pipeline Backend on Outdated Linux Systems
+If your system is too old to meet the dependency requirements of `mineru[core]`, this option minimally satisfies MinerU's runtime requirements; it suits legacy systems that cannot be upgraded and only need the pipeline backend.
+```bash
+uv pip install mineru[pipeline_old_linux]
+```

The file diff has been suppressed because it is too large
+ 2 - 1
docs/zh/quick_start/index.md


+ 0 - 72
docs/zh/quick_start/local_deployment.md

@@ -1,72 +0,0 @@
-# Local Deployment
-
-## Install MinerU
-
-### Install with pip or uv
-
-```bash
-pip install --upgrade pip -i https://mirrors.aliyun.com/pypi/simple
-pip install uv -i https://mirrors.aliyun.com/pypi/simple
-uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple 
-```
-
-### Install from Source
-
-```bash
-git clone https://github.com/opendatalab/MinerU.git
-cd MinerU
-uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple
-```
-
-> [!NOTE]
-> cuda/mps acceleration is supported automatically after installation on Linux and macOS. Windows users who need cuda acceleration
-> should visit the [PyTorch official website](https://pytorch.org/get-started/locally/) and install pytorch for the appropriate cuda version.
-
-### Install the Full Version (with sglang acceleration) (requires a Turing-or-later GPU with at least 8GB VRAM)
-
-To use **sglang to accelerate VLM model inference**, install the full version in one of the following ways:
-
-- Install with uv or pip
-  ```bash
-  uv pip install -U "mineru[all]" -i https://mirrors.aliyun.com/pypi/simple
-  ```
-- Install from source:
-  ```bash
-  uv pip install -e .[all] -i https://mirrors.aliyun.com/pypi/simple
-  ```
-  
-> [!TIP]
-> If an exception occurs during sglang installation, refer to the [sglang official documentation](https://docs.sglang.ai/start/install.html) to try to resolve it, or install via Docker directly.
-
-- Build the image with the Dockerfile:
-  ```bash
-  wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/china/Dockerfile
-  docker build -t mineru-sglang:latest -f Dockerfile .
-  ```
-  Start the Docker container:
-  ```bash
-  docker run --gpus all \
-    --shm-size 32g \
-    -p 30000:30000 \
-    --ipc=host \
-    mineru-sglang:latest \
-    mineru-sglang-server --host 0.0.0.0 --port 30000
-  ```
-  Or start with Docker Compose:
-  ```bash
-    wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
-    docker compose -f compose.yaml up -d
-  ```
-  
-> [!TIP]
-> The Dockerfile uses `lmsysorg/sglang:v0.4.8.post1-cu126` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper platforms.
-> If you are using the newer `Blackwell` platform, change the base image to `lmsysorg/sglang:v0.4.8.post1-cu128-b200`.
-
-### Install the Client (to connect to sglang-server from edge devices that need only a CPU and a network connection)
-
-```bash
-uv pip install -U mineru -i https://mirrors.aliyun.com/pypi/simple
-mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://<host_ip>:<port>
-```
-
----

The file diff has been suppressed because it is too large
+ 0 - 2
docs/zh/quick_start/online_demo.md


+ 0 - 9
docs/zh/todo.md

@@ -1,9 +0,0 @@
-# TODO
-
-- [x] Model-based reading order
-- [x] Recognition of tables of contents and lists in body text
-- [x] Table recognition
-- [x] Heading classification
-- [ ] Code block recognition in body text
-- [ ] [Chemical formula recognition](../chemical_knowledge_introduction/introduction.pdf)
-- [ ] Geometric shape recognition

+ 56 - 0
docs/zh/usage/advanced_cli_parameters.md

@@ -0,0 +1,56 @@
+# Advanced Command Line Parameters
+
+## SGLang Acceleration Parameter Optimization
+
+### Memory Optimization Parameters
+> [!TIP]
+> SGLang acceleration mode currently runs on Turing-architecture graphics cards with as little as 8GB VRAM, but cards with less than 24GB VRAM may run out of memory. You can optimize memory usage with the following parameters:
+> - If you encounter insufficient VRAM on a single graphics card, you may need to reduce the KV cache size with `--mem-fraction-static 0.5`. If VRAM issues persist, try reducing it further to `0.4` or lower.
+> - If you have two or more graphics cards, you can try tensor parallelism (TP) mode to expand available VRAM: `--tp-size 2`
+
+### Performance Optimization Parameters
+> [!TIP]
+> If you can already use SGLang for accelerated VLM model inference but want to push inference speed further, you can try the following parameters:
+> - If you have multiple graphics cards, you can use SGLang's data parallel (DP) mode to increase throughput: `--dp-size 2`
+> - You can also enable `torch.compile` to speed up inference by approximately 15%: `--enable-torch-compile`
+
+### Parameter Passing Instructions
+> [!TIP]
+> - To learn more about `sglang` parameter usage, see the [sglang official documentation](https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands)
+> - All officially supported sglang parameters can be passed to MinerU through command line arguments, including for the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`
+
+## GPU Device Selection and Configuration
+
+### CUDA_VISIBLE_DEVICES Basic Usage
+> [!TIP]
+> - In any situation, you can specify visible GPU devices by prefixing the command line with the `CUDA_VISIBLE_DEVICES` environment variable. For example:
+>   ```bash
+>   CUDA_VISIBLE_DEVICES=1 mineru -p <input_path> -o <output_path>
+>   ```
+> - This works for all command line invocations, including `mineru`, `mineru-sglang-server`, `mineru-gradio`, and `mineru-api`, and applies to both the `pipeline` and `vlm` backends.
+
+### Common Device Configuration Examples
+> [!TIP]
+> - Here are some common `CUDA_VISIBLE_DEVICES` settings:
+>   ```bash
+>   CUDA_VISIBLE_DEVICES=1        # Only device 1 will be seen
+>   CUDA_VISIBLE_DEVICES=0,1      # Devices 0 and 1 will be visible
+>   CUDA_VISIBLE_DEVICES="0,1"    # Same as above; quotation marks are optional
+>   CUDA_VISIBLE_DEVICES=0,2,3    # Devices 0, 2, 3 will be visible; device 1 is masked
+>   CUDA_VISIBLE_DEVICES=""      # No GPU will be visible
+>   ```
+
+### Practical Application Scenarios
+> [!TIP]
+> Here are some possible usage scenarios:
+> - If you have multiple graphics cards and need to use cards 0 and 1 with multi-GPU parallelism to start `sglang-server`, you can use the following command:
+>   ```bash
+>   CUDA_VISIBLE_DEVICES=0,1 mineru-sglang-server --port 30000 --dp-size 2
+>   ```
+> - If you have multiple graphics cards and need to start two `fastapi` services on cards 0 and 1, each listening on a different port, you can use the following commands:
+>   ```bash
+>   # In terminal 1
+>   CUDA_VISIBLE_DEVICES=0 mineru-api --host 127.0.0.1 --port 8000
+>   # In terminal 2
+>   CUDA_VISIBLE_DEVICES=1 mineru-api --host 127.0.0.1 --port 8001
+>   ```

+ 0 - 57
docs/zh/usage/api.md

@@ -1,57 +0,0 @@
-# API Calls or Visual Invocation
-
-1. Call directly via the python api: [Python usage example](https://github.com/opendatalab/MinerU/blob/master/demo/demo.py)
-2. Call via fast api:
-    ```bash
-    mineru-api --host 127.0.0.1 --port 8000
-    ```
-    Visit http://127.0.0.1:8000/docs in your browser to view the API documentation.
-
-3. Call via the gradio webui or gradio api
-    ```bash
-    # Use the pipeline/vlm-transformers/vlm-sglang-client backends
-    mineru-gradio --server-name 127.0.0.1 --server-port 7860
-    # Or use the vlm-sglang-engine/pipeline backends
-    mineru-gradio --server-name 127.0.0.1 --server-port 7860 --enable-sglang-engine true
-    ```
-    Visit http://127.0.0.1:7860 in your browser to use the Gradio WebUI, or http://127.0.0.1:7860/?view=api to use the Gradio API.
-
-> [!TIP]
-> - Here are some suggestions and notes for using sglang acceleration mode:
-> - sglang acceleration mode currently runs on Turing-architecture graphics cards with as little as 8GB VRAM, but cards with less than 24GB VRAM may run out of memory. You can optimize memory usage with the following parameters:
->   - If you encounter insufficient VRAM on a single graphics card, you may need to reduce the KV cache size with `--mem-fraction-static 0.5`. If VRAM issues persist, try reducing it further to `0.4` or lower.
->   - If you have two or more graphics cards, you can try tensor parallelism (TP) mode to expand available VRAM: `--tp-size 2`
-> - If you can already use sglang for accelerated VLM inference but want to push inference speed further, you can try the following parameters:
->   - If you have multiple graphics cards, you can use sglang's data parallel (DP) mode to increase throughput: `--dp-size 2`
->   - You can also enable `torch.compile` to speed up inference by approximately 15%: `--enable-torch-compile`
-> - To learn more about `sglang` parameter usage, see the [sglang official documentation](https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands)
-> - All officially supported sglang parameters can be passed to MinerU through command line arguments, including for the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`
-
-> [!TIP]
-> - In any situation, you can specify visible GPU devices by prefixing the command line with the `CUDA_VISIBLE_DEVICES` environment variable. For example:
->   ```bash
->   CUDA_VISIBLE_DEVICES=1 mineru -p <input_path> -o <output_path>
->   ```
-> - This works for all command line invocations, including `mineru`, `mineru-sglang-server`, `mineru-gradio`, and `mineru-api`, and applies to both the `pipeline` and `vlm` backends.
-> - Here are some common `CUDA_VISIBLE_DEVICES` settings:
->   ```bash
->   CUDA_VISIBLE_DEVICES=1        # Only device 1 will be seen
->   CUDA_VISIBLE_DEVICES=0,1      # Devices 0 and 1 will be visible
->   CUDA_VISIBLE_DEVICES="0,1"    # Same as above; quotation marks are optional
->   CUDA_VISIBLE_DEVICES=0,2,3    # Devices 0, 2, 3 will be visible; device 1 is masked
->   CUDA_VISIBLE_DEVICES=""      # No GPU will be visible
->   ```
-> - Here are some possible usage scenarios:
->   - If you have multiple graphics cards and need to use cards 0 and 1 with multi-GPU parallelism to start `sglang-server`, you can use the following command:
->   ```bash
->   CUDA_VISIBLE_DEVICES=0,1 mineru-sglang-server --port 30000 --dp-size 2
->   ```
->   - If you have multiple graphics cards and need to start two `fastapi` services on cards 0 and 1, each listening on a different port, you can use the following commands:
->   ```bash
->   # In terminal 1
->   CUDA_VISIBLE_DEVICES=0 mineru-api --host 127.0.0.1 --port 8000
->   # In terminal 2
->   CUDA_VISIBLE_DEVICES=1 mineru-api --host 127.0.0.1 --port 8001
->   ```
-
----

+ 73 - 0
docs/zh/usage/cli_tools.md

@@ -0,0 +1,73 @@
+# Command Line Tools Usage Instructions
+
+## View Help Information
+To view help information for the MinerU command line tools, use the `--help` parameter. Here are help examples for each command line tool:
+```bash
+mineru --help
+Usage: mineru [OPTIONS]
+
+Options:
+  -v, --version                   Show version and exit
+  -p, --path PATH                 Input file path or directory (required)
+  -o, --output PATH               Output directory (required)
+  -m, --method [auto|txt|ocr]     Parsing method: auto (default), txt, ocr (pipeline backend only)
+  -b, --backend [pipeline|vlm-transformers|vlm-sglang-engine|vlm-sglang-client]
+                                  Parsing backend (default: pipeline)
+  -l, --lang [ch|ch_server|ch_lite|en|korean|japan|chinese_cht|ta|te|ka|latin|arabic|east_slavic|cyrillic|devanagari]
+                                  Specify document language (improves OCR accuracy, pipeline backend only)
+  -u, --url TEXT                  Service address when using sglang-client
+  -s, --start INTEGER             Starting page number for parsing (0-based)
+  -e, --end INTEGER               Ending page number for parsing (0-based)
+  -f, --formula BOOLEAN           Enable formula parsing (default: enabled)
+  -t, --table BOOLEAN             Enable table parsing (default: enabled)
+  -d, --device TEXT               Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline backend only)
+  --vram INTEGER                  Maximum GPU VRAM usage per process (GB) (pipeline backend only)
+  --source [huggingface|modelscope|local]
+                                  Model source, default: huggingface
+  --help                          Show help information
+```
+```bash
+mineru-api --help
+Usage: mineru-api [OPTIONS]
+
+Options:
+  --host TEXT     Server host (default: 127.0.0.1)
+  --port INTEGER  Server port (default: 8000)
+  --reload        Enable auto-reload (development mode)
+  --help          Show this message and exit.
+```
+```bash
+mineru-gradio --help
+Usage: mineru-gradio [OPTIONS]
+
+Options:
+  --enable-example BOOLEAN        Enable example files for input. The example
+                                  files to be input need to be placed in the
+                                  `example` folder within the directory where
+                                  the command is currently executed.
+  --enable-sglang-engine BOOLEAN  Enable SgLang engine backend for faster
+                                  processing.
+  --enable-api BOOLEAN            Enable gradio API for serving the
+                                  application.
+  --max-convert-pages INTEGER     Set the maximum number of pages to convert
+                                  from PDF to Markdown.
+  --server-name TEXT              Set the server name for the Gradio app.
+  --server-port INTEGER           Set the server port for the Gradio app.
+  --latex-delimiters-type [a|b|all]
+                                  Set the type of LaTeX delimiters to use in
+                                  Markdown rendering: 'a' for type '$', 'b' for
+                                  type '()[]', 'all' for both types.
+  --help                          Show this message and exit.
+```
+
+## Environment Variables Description
+
+Some parameters of the MinerU command line tools have equivalent environment variable configurations. Environment variable settings generally take priority over command line parameters and take effect across all command line tools.
+- `MINERU_DEVICE_MODE`: Specifies the inference device; supports device types like `cpu/cuda/cuda:0/npu/mps`; only effective for the `pipeline` backend.
+- `MINERU_VIRTUAL_VRAM_SIZE`: Specifies the maximum GPU VRAM usage per process (GB); only effective for the `pipeline` backend.
+- `MINERU_MODEL_SOURCE`: Specifies the model source; supports `huggingface/modelscope/local`; defaults to `huggingface` and can be switched to `modelscope` or local models.
+- `MINERU_TOOLS_CONFIG_JSON`: Specifies the configuration file path; defaults to `mineru.json` in the user directory.
+- `MINERU_FORMULA_ENABLE`: Enables formula parsing; defaults to `true`; set to `false` to disable formula parsing.
+- `MINERU_TABLE_ENABLE`: Enables table parsing; defaults to `true`; set to `false` to disable table parsing.
+
+

+ 0 - 11
docs/zh/usage/config.md

@@ -1,11 +0,0 @@
-
-# Extending MinerU Functionality with Configuration Files
-
-- MinerU is ready to use out of the box, but it also supports extending functionality through configuration files. You can create a `mineru.json` file in your user directory and add custom configuration.
-- The `mineru.json` file is generated automatically when you use the built-in model download command `mineru-models-download`, or you can create it by copying the [configuration template file](../../mineru.template.json) to your user directory and renaming it to `mineru.json`.
-- Here are some available configuration options:
-  - `latex-delimiter-config`: Configures LaTeX formula delimiters; defaults to the `$` symbol and can be changed to other symbols or strings as needed.
-  - `llm-aided-config`: Configures parameters for LLM-assisted title hierarchy; compatible with all LLM models that support the OpenAI protocol; defaults to Alibaba Cloud Bailian's `qwen2.5-32b-instruct` model. You need to configure your own API key and set `enable` to `true` to enable this feature.
-  - `models-dir`: Specifies the local model storage directory; specify separate model directories for the `pipeline` and `vlm` backends; after that, you can use local models by setting `export MINERU_MODEL_SOURCE=local`.
-
----

+ 52 - 103
docs/zh/usage/index.md

@@ -1,125 +1,74 @@
 # Using MinerU
 
-## Command Line Usage
-
-### Basic Usage
-
-The simplest command line invocation is:
-
-```bash
-mineru -p <input_path> -o <output_path>
-```
-
-- `<input_path>`: Local PDF/image file or directory (supports pdf/png/jpg/jpeg/webp/gif)
-- `<output_path>`: Output directory
-
-### View Help Information
-
-Get descriptions of all available parameters:
-
+## Quick Model Source Configuration
+MinerU uses `huggingface` as the default model source. If your network cannot access `huggingface`, you can conveniently switch the model source to `modelscope` via an environment variable:
 ```bash
-mineru --help
-```
-
-### Parameter Details
-
-```text
-Usage: mineru [OPTIONS]
-
-Options:
-  -v, --version                   Show version and exit
-  -p, --path PATH                 Input file path or directory (required)
-  -o, --output PATH               Output directory (required)
-  -m, --method [auto|txt|ocr]     Parsing method: auto (default), txt, ocr (pipeline backend only)
-  -b, --backend [pipeline|vlm-transformers|vlm-sglang-engine|vlm-sglang-client]
-                                  Parsing backend (default: pipeline)
-  -l, --lang [ch|ch_server|ch_lite|en|korean|japan|chinese_cht|ta|te|ka|latin|arabic|east_slavic|cyrillic|devanagari]
-                                  Specify document language (improves OCR accuracy, pipeline backend only)
-  -u, --url TEXT                  Service address when using sglang-client
-  -s, --start INTEGER             Starting page number for parsing (0-based)
-  -e, --end INTEGER               Ending page number for parsing (0-based)
-  -f, --formula BOOLEAN           Enable formula parsing (default: enabled)
-  -t, --table BOOLEAN             Enable table parsing (default: enabled)
-  -d, --device TEXT               Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline backend only)
-  --vram INTEGER                  Maximum GPU VRAM usage per process (GB) (pipeline backend only)
-  --source [huggingface|modelscope|local]
-                                  Model source, default: huggingface
-  --help                          Show help information
+export MINERU_MODEL_SOURCE=modelscope
 ```
+For more information about model source configuration and custom local model paths, please refer to the [Model Source Documentation](./model_source.md).
 
 ---
 
-## Model Source Configuration
-
-MinerU automatically downloads required models from HuggingFace on first run. If HuggingFace is inaccessible, you can switch the model source as follows:
-
-### Switch to the ModelScope Source
-
-```bash
-mineru -p <input_path> -o <output_path> --source modelscope
-```
-
-Or set an environment variable:
-
+## Quick Usage via the Command Line
+MinerU ships with command line tools that let you quickly parse PDFs from the command line:
 ```bash
-export MINERU_MODEL_SOURCE=modelscope
+# Default parsing using pipeline backend
 mineru -p <input_path> -o <output_path>
 ```
+- `<input_path>`: Local PDF/image file or directory
+- `<output_path>`: Output directory
 
-### Using Local Models
-
-#### 1. Download Models Locally
-
-```bash
-mineru-models-download --help
-```
-
-Or use the interactive command line tool to select models to download:
-
-```bash
-mineru-models-download
-```
-
-After the download completes, the model path is printed in the current terminal window and automatically written to `mineru.json` in the user directory.
+> [!NOTE]
+> The command line tools automatically attempt cuda/mps acceleration on Linux and macOS systems. Windows users who need cuda acceleration
+> should visit the [PyTorch official website](https://pytorch.org/get-started/locally/) and select the command matching their cuda version to install acceleration-enabled `torch` and `torchvision`.
 
-#### 2. Parse Using Local Models
+> [!TIP]
+> For more information about output files, see the [Output File Documentation](./output_file.md).
 
 ```bash
-mineru -p <input_path> -o <output_path> --source local
+# Or specify vlm backend for parsing
+mineru -p <input_path> -o <output_path> -b vlm-transformers
 ```
+> [!TIP]
+> The vlm backend also supports `sglang` acceleration. Compared to the `transformers` backend, `sglang` can deliver a 20-30x speedup. See the [Extension Modules Installation Guide](../quick_start/extension_modules.md) for how to install the complete package with `sglang` support.
 
-Or enable it via an environment variable:
-
-```bash
-export MINERU_MODEL_SOURCE=local
-mineru -p <input_path> -o <output_path>
-```
+If you need to adjust parsing options with custom parameters, see the more detailed [Command Line Tools Usage Instructions](./cli_tools.md).
 
 ---
 
-## Using sglang to Accelerate VLM Model Inference
-
-### Via the sglang-engine Mode
-
-```bash
-mineru -p <input_path> -o <output_path> -b vlm-sglang-engine
-```
-
-### Via the sglang-server/client Mode
-
-1. Start the server:
-
-```bash
-mineru-sglang-server --port 30000
-```
-
-2. Call from the client in another terminal:
+## Advanced Usage via API, WebUI, sglang-client/server
+
+- Call directly via the python api: [Python usage example](https://github.com/opendatalab/MinerU/blob/master/demo/demo.py)
+- Call via fast api:
+  ```bash
+  mineru-api --host 127.0.0.1 --port 8000
+  ```
+  Visit http://127.0.0.1:8000/docs in your browser to view the API documentation.
+- Start the gradio webui visual frontend:
+  ```bash
+  # Use the pipeline/vlm-transformers/vlm-sglang-client backends
+  mineru-gradio --server-name 127.0.0.1 --server-port 7860
+  # Or use the vlm-sglang-engine/pipeline backends (requires the sglang environment)
+  mineru-gradio --server-name 127.0.0.1 --server-port 7860 --enable-sglang-engine true
+  ```
+  Visit http://127.0.0.1:7860 in your browser to use the Gradio WebUI, or http://127.0.0.1:7860/?view=api to use the Gradio API.
+- Use the `sglang-client/server` method:
+  ```bash
+  # Start sglang server (requires sglang environment)
+  mineru-sglang-server --port 30000
+  # In another terminal, connect to sglang server via sglang client (only requires CPU and network, no sglang environment needed)
+  mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://127.0.0.1:30000
+  ```
+> [!TIP]
+> All officially supported sglang parameters can be passed to MinerU through command line arguments, including for the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`.
+> We have compiled some commonly used `sglang` parameters and usage methods in [Advanced Command Line Parameters](./advanced_cli_parameters.md).
 
-```bash
-mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://127.0.0.1:30000
-```
 
-> [!TIP]
-> For more information about output files, see the [Output File Documentation](../output_file.md)
+## Extending MinerU Functionality with Configuration Files
 
----
+- MinerU is ready to use out of the box, but it also supports extending functionality through configuration files. You can create a `mineru.json` file in your user directory and add custom configuration.
+- The `mineru.json` file is generated automatically when you use the built-in model download command `mineru-models-download`, or you can create it by copying the [configuration template file](https://github.com/opendatalab/MinerU/blob/master/mineru.template.json) to your user directory and renaming it to `mineru.json`.
+- Here are some available configuration options:
+  - `latex-delimiter-config`: Configures LaTeX formula delimiters; defaults to the `$` symbol and can be changed to other symbols or strings as needed.
+  - `llm-aided-config`: Configures parameters for LLM-assisted title hierarchy; compatible with all LLM models that support the OpenAI protocol; defaults to Alibaba Cloud Bailian's `qwen2.5-32b-instruct` model. You need to configure your own API key and set `enable` to `true` to enable this feature.
+  - `models-dir`: Specifies the local model storage directory; specify separate model directories for the `pipeline` and `vlm` backends. After specifying the directories, you can use local models by setting `export MINERU_MODEL_SOURCE=local`.

+ 55 - 0
docs/zh/usage/model_source.md

@@ -0,0 +1,55 @@
+# Model Source Documentation
+
+MinerU uses `HuggingFace` and `ModelScope` as model repositories. You can switch model sources or use local models as needed.
+
+- `HuggingFace` is the default model source, providing excellent loading speed and high stability worldwide.
+- `ModelScope` is the best choice for users in mainland China; it provides SDK modules seamlessly compatible with `hf` and suits users who cannot access HuggingFace.
+
+## Methods to Switch Model Sources
+
+### Switch via Command Line Parameters
+Currently, only the `mineru` command line tool supports switching the model source via a command line parameter; other command line tools such as `mineru-api` and `mineru-gradio` do not support this yet.
+```bash
+mineru -p <input_path> -o <output_path> --source modelscope
+```
+
+### Switch via Environment Variables
+You can switch the model source in any situation by setting an environment variable; this applies to all command line tools and API calls.
+```bash
+export MINERU_MODEL_SOURCE=modelscope
+```
+or
+```python
+import os
+os.environ["MINERU_MODEL_SOURCE"] = "modelscope"
+```
+> [!TIP]
+> A model source set through an environment variable takes effect in the current terminal session until the terminal is closed or the variable is modified. It has higher priority than the command line parameter: if both are set, the command line parameter is ignored.
+
+
+## Using Local Models
+
+### 1. Download Models to Local Storage
+```bash
+mineru-models-download --help
+```
+or use the interactive command line tool to select models to download:
+```bash
+mineru-models-download
+```
+> [!TIP]
+> - After the download completes, the model path is printed in the current terminal window and automatically written to `mineru.json` in your user directory.
+> - After downloading models locally, you can freely move the model folder to another location, as long as you update the model path in `mineru.json`.
+> - If you deploy the model folder to another server, make sure to move the `mineru.json` file to the new device's user directory and configure the model path correctly.
+> - If you need to update model files, run the `mineru-models-download` command again. Model updates do not currently support custom paths: if you haven't moved the local model folder, model files are updated incrementally; if you have moved it, model files are re-downloaded to the default location and `mineru.json` is updated.
+
+### 2. Parse Using Local Models
+
+```bash
+mineru -p <input_path> -o <output_path> --source local
+```
+or enable it via an environment variable:
+```bash
+export MINERU_MODEL_SOURCE=local
+mineru -p <input_path> -o <output_path>
+```

+ 0 - 0
docs/zh/output_file.md → docs/zh/usage/output_file.md


+ 12 - 10
mkdocs.yml

@@ -49,15 +49,16 @@ nav:
     - "MinerU": index.md
     - Quick Start:
       - quick_start/index.md
-      - Online Demo: quick_start/online_demo.md
-      - Local Deployment: quick_start/local_deployment.md
+      - Extension Modules: quick_start/extension_modules.md
+      - Docker Deployment: quick_start/docker_deployment.md
     - Usage:
       - usage/index.md
-      - API Calls or Visual Invocation: usage/api.md
-      - Extending MinerU Functionality Through Configuration Files: usage/config.md
+      - CLI Tools: usage/cli_tools.md
+      - Model Source: usage/model_source.md
+      - Advanced CLI Parameters: usage/advanced_cli_parameters.md
+      - Output File Format: usage/output_file.md
   - FAQ:
       - FAQ: FAQ/index.md
-  - Output File Format: output_file.md
   - Known Issues: known_issues.md
   - TODO: todo.md
 
@@ -76,14 +77,15 @@ plugins:
           nav_translations:
             Home: 主页
             Quick Start: 快速开始
-            Online Demo: 在线体验
-            Local Deployment: 本地部署
+            Extension Modules: 扩展模块
+            Docker Deployment: Docker部署
             Usage: 使用方法
-            API Calls or Visual Invocation: API 调用 或 可视化调用
-            Extending MinerU Functionality Through Configuration Files: 基于配置文件扩展 MinerU 功能
+            CLI Tools: 命令行工具
+            Model Source: 模型源
+            Advanced CLI Parameters: 命令行参数进阶技巧
             FAQ: FAQ
             Output File Format: 输出文件格式
-            Known Issues: Known Issues
+            Known Issues: 已知问题
             TODO: TODO
   - mkdocs-video
 

Some files were not shown because too many files have changed