
fix: add horizontal rules for better section separation in documentation

myhloli, 4 months ago
Parent
Current commit
3f616ec64b

+ 10 - 3
docs/en/quick_start/docker_deployment.md

@@ -13,6 +13,8 @@ docker build -t mineru-sglang:latest -f Dockerfile .
 > The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `lmsysorg/sglang:v0.4.8.post1-cu126` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper platforms.
 > If you are using the newer `Blackwell` platform, please modify the base image to `lmsysorg/sglang:v0.4.8.post1-cu128-b200` before executing the build operation.
 
+---
+
 ## Docker Description
 
 MinerU's Docker uses `lmsysorg/sglang` as the base image, so it includes the `sglang` inference acceleration framework and necessary dependencies by default. Therefore, on compatible devices, you can directly use `sglang` to accelerate VLM model inference.
@@ -26,6 +28,8 @@ MinerU's Docker uses `lmsysorg/sglang` as the base image, so it includes the `sg
 >
 > If your device doesn't meet the above requirements, you can still use other features of MinerU, but cannot use `sglang` to accelerate VLM model inference, meaning you cannot use the `vlm-sglang-engine` backend or start the `vlm-sglang-server` service.
 
+---
+
 ## Start Docker Container:
 
 ```bash
@@ -40,6 +44,8 @@ docker run --gpus all \
 After executing this command, you will enter the Docker container's interactive terminal with some ports mapped for potential services. You can directly run MinerU-related commands within the container to use MinerU's features.
 You can also directly start MinerU services by replacing `/bin/bash` with service startup commands. For detailed instructions, please refer to the [MinerU Usage Documentation](../usage/index.md).
 
+---
+
 ## Start Services Directly with Docker Compose
 
 We provide a [compose.yaml](https://github.com/opendatalab/MinerU/blob/master/docker/compose.yaml) file that you can use to quickly start MinerU services.
@@ -55,7 +61,8 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
 >- Different services might have additional parameter configurations, which you can view and edit in the `compose.yaml` file.
 >- Due to the pre-allocation of GPU memory by the `sglang` inference acceleration framework, you may not be able to run multiple `sglang` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-sglang-server` service or using the `vlm-sglang-engine` backend.
 
-- Start `sglang-server` service and connect to `sglang-server` via `vlm-sglang-client` backend:
+### Start sglang-server service
+Connect to `sglang-server` via the `vlm-sglang-client` backend:
   ```bash
   docker compose -f compose.yaml --profile mineru-sglang-server up -d
   ```
@@ -65,14 +72,14 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
   > mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://<server_ip>:30000
   > ```
 
-- Start API service:
+### Start Web API service
   ```bash
   docker compose -f compose.yaml --profile mineru-api up -d
   ```
   >[!TIP]
   >Access `http://<server_ip>:8000/docs` in your browser to view the API documentation.
 
-- Start Gradio WebUI service:
+### Start Gradio WebUI service
   ```bash
   docker compose -f compose.yaml --profile mineru-gradio up -d
   ```
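The deployment page above notes that `/bin/bash` can be replaced with a service startup command when running the container. A minimal sketch of that pattern, assuming the `mineru-sglang:latest` tag built earlier; the `--host`/`--port` flags for `mineru-api` are assumptions to verify against the usage documentation:

```shell
# Sketch only: start the Web API service directly instead of dropping into an
# interactive shell. Flag names are assumptions; check `mineru-api --help`.
docker run --gpus all \
  -p 8000:8000 \
  -it mineru-sglang:latest \
  mineru-api --host 0.0.0.0 --port 8000
```

The same substitution works for the other entry points (`mineru-gradio`, `mineru-sglang-server`), mapping whichever port that service listens on.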

+ 12 - 0
docs/en/usage/advanced_cli_parameters.md

@@ -1,5 +1,7 @@
 # Advanced Command Line Parameters
 
+---
+
 ## SGLang Acceleration Parameter Optimization
 
 ### Memory Optimization Parameters
@@ -9,6 +11,8 @@
 > - If you encounter insufficient VRAM when using a single graphics card, you may need to reduce the KV cache size with `--mem-fraction-static 0.5`. If VRAM issues persist, try reducing it further to `0.4` or lower.
 > - If you have two or more graphics cards, you can try using tensor parallelism (TP) mode to simply expand available VRAM: `--tp-size 2`
 
+---
+
 ### Performance Optimization Parameters
 > [!TIP]
 > If you can already use SGLang normally for accelerated VLM model inference but still want to further improve inference speed, you can try the following parameters:
@@ -16,11 +20,15 @@
 > - If you have multiple graphics cards, you can use SGLang's multi-card parallel mode to increase throughput: `--dp-size 2`
 > - You can also enable `torch.compile` to accelerate inference speed by approximately 15%: `--enable-torch-compile`
 
+---
+
 ### Parameter Passing Instructions
 > [!TIP]
 > - All officially supported SGLang parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`
 > - If you want to learn more about `sglang` parameter usage, please refer to the [SGLang official documentation](https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands)
 
+---
+
 ## GPU Device Selection and Configuration
 
 ### CUDA_VISIBLE_DEVICES Basic Usage
@@ -31,6 +39,8 @@
 >   ```
 > - This specification method is effective for all command line calls, including `mineru`, `mineru-sglang-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.
 
+---
+
 ### Common Device Configuration Examples
 > [!TIP]
 > Here are some common `CUDA_VISIBLE_DEVICES` setting examples:
@@ -42,6 +52,8 @@
 >   CUDA_VISIBLE_DEVICES=""  # No GPU will be visible
 >   ```
 
+---
+
 ## Practical Application Scenarios
 > [!TIP]
 > Here are some possible usage scenarios:
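As one such scenario, the memory, performance, and device-selection parameters described in this file can be combined on a single command line. A hedged sketch; the flag values are illustrative, and passing `--port` through `mineru-sglang-server` is an assumption based on the parameter-passing notes above:

```shell
# Sketch: serve on GPUs 0 and 1 only, shrink the KV cache for limited VRAM,
# and enable tensor parallelism plus torch.compile. Values are illustrative.
CUDA_VISIBLE_DEVICES=0,1 mineru-sglang-server \
  --port 30000 \
  --mem-fraction-static 0.5 \
  --tp-size 2 \
  --enable-torch-compile
```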

+ 2 - 0
docs/en/usage/index.md

@@ -72,6 +72,8 @@ If you need to adjust parsing options through custom parameters, you can also ch
 > All officially supported sglang parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`.
 > We have compiled some commonly used parameters and usage methods for `sglang`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
 
+---
+
 ## Extending MinerU Functionality with Configuration Files
 
 MinerU is now ready to use out of the box, but also supports extending functionality through configuration files. You can edit the `mineru.json` file in your user directory to add custom configurations.

+ 9 - 3
docs/zh/quick_start/docker_deployment.md

@@ -13,6 +13,8 @@ docker build -t mineru-sglang:latest -f Dockerfile .
 > The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `lmsysorg/sglang:v0.4.8.post1-cu126` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper platforms.
 > If you are using the newer `Blackwell` platform, please modify the base image to `lmsysorg/sglang:v0.4.8.post1-cu128-b200` before executing the build operation.
 
+---
+
 ## Docker Description
 
 MinerU's Docker uses `lmsysorg/sglang` as the base image, so it includes the `sglang` inference acceleration framework and necessary dependencies by default. Therefore, on compatible devices, you can directly use `sglang` to accelerate VLM model inference.
@@ -25,6 +27,8 @@ MinerU's Docker uses `lmsysorg/sglang` as the base image, so it includes the `sg
 >
 > If your device doesn't meet the above requirements, you can still use other features of MinerU, but cannot use `sglang` to accelerate VLM model inference, meaning you cannot use the `vlm-sglang-engine` backend or start the `vlm-sglang-server` service.
 
+---
+
 ## Start Docker Container:
 
 ```bash
@@ -39,6 +43,7 @@ docker run --gpus all \
 After executing this command, you will enter the Docker container's interactive terminal with some ports mapped for potential services. You can directly run MinerU-related commands within the container to use MinerU's features.
 You can also directly start MinerU services by replacing `/bin/bash` with service startup commands. For detailed instructions, please refer to the [MinerU Usage Documentation](../usage/index.md).
 
+---
 
 
 ## Start Services Directly with Docker Compose
 
@@ -54,7 +59,8 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
 >- Different services might have additional parameter configurations, which you can view and edit in the `compose.yaml` file.
 >- Due to the pre-allocation of GPU memory by the `sglang` inference acceleration framework, you may not be able to run multiple `sglang` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-sglang-server` service or using the `vlm-sglang-engine` backend.
 
-- Start the `sglang-server` service and connect to it via the `vlm-sglang-client` backend:
+### Start sglang-server service
+Connect to `sglang-server` via the `vlm-sglang-client` backend:
   ```bash
   docker compose -f compose.yaml --profile mineru-sglang-server up -d
   ```
@@ -64,14 +70,14 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
   > mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://<server_ip>:30000
   > ```
 
-- Start API service:
+### Start Web API service
   ```bash
   docker compose -f compose.yaml --profile mineru-api up -d
   ```
   >[!TIP]
   >Access `http://<server_ip>:8000/docs` in your browser to view the API documentation.
 
-- Start Gradio WebUI service:
+### Start Gradio WebUI service
   ```bash
   docker compose -f compose.yaml --profile mineru-gradio up -d
   ```

+ 12 - 0
docs/zh/usage/advanced_cli_parameters.md

@@ -1,5 +1,7 @@
 # Advanced Command Line Parameters
 
+---
+
 ## SGLang Acceleration Parameter Optimization
 
 ### Memory Optimization Parameters
@@ -9,6 +11,8 @@
 > - If you encounter insufficient VRAM when using a single graphics card, you may need to reduce the KV cache size with `--mem-fraction-static 0.5`. If VRAM issues persist, try reducing it further to `0.4` or lower.
 > - If you have two or more graphics cards, you can try using tensor parallelism (TP) mode to simply expand available VRAM: `--tp-size 2`
 
+---
+
 ### Performance Optimization Parameters
 > [!TIP]
 > If you can already use sglang normally for accelerated VLM model inference but still want to further improve inference speed, you can try the following parameters:
@@ -16,11 +20,15 @@
 > - If you have multiple graphics cards, you can use sglang's multi-card parallel mode to increase throughput: `--dp-size 2`
 > - You can also enable `torch.compile` to accelerate inference speed by approximately 15%: `--enable-torch-compile`
 
+---
+
 ### Parameter Passing Instructions
 > [!TIP]
 > - All officially supported sglang parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`
 > - If you want to learn more about `sglang` parameter usage, please refer to the [sglang official documentation](https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands)
 
+---
+
 ## GPU Device Selection and Configuration
 
 ### CUDA_VISIBLE_DEVICES Basic Usage
@@ -31,6 +39,8 @@
 >   ```
 > - This specification method is effective for all command line calls, including `mineru`, `mineru-sglang-server`, `mineru-gradio`, and `mineru-api`, and applies to both the `pipeline` and `vlm` backends.
 
+---
+
 ### Common Device Configuration Examples
 > [!TIP]
 > Here are some common `CUDA_VISIBLE_DEVICES` setting examples:
@@ -42,6 +52,8 @@
 >   CUDA_VISIBLE_DEVICES=""  # No GPU will be visible
 >   ```
 
+---
+
 ## Practical Application Scenarios
 
 > [!TIP]

+ 1 - 0
docs/zh/usage/index.md

@@ -72,6 +72,7 @@ mineru -p <input_path> -o <output_path> -b vlm-transformers
 > All officially supported sglang parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`.
 > We have compiled some commonly used `sglang` parameters and usage methods, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
 
+---
 
 
 ## Extending MinerU Functionality with Configuration Files