
Merge pull request #3071 from myhloli/dev

fix: remove unnecessary horizontal rules for improved readability in documentation
Xiaomeng Zhao 4 months ago
commit c70c4cc9fb

+ 0 - 6
docs/en/quick_start/docker_deployment.md

@@ -13,8 +13,6 @@ docker build -t mineru-sglang:latest -f Dockerfile .
 > The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `lmsysorg/sglang:v0.4.8.post1-cu126` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper platforms.
 > If you are using the newer `Blackwell` platform, please modify the base image to `lmsysorg/sglang:v0.4.8.post1-cu128-b200` before executing the build operation.
 
----
-
 ## Docker Description
 
 MinerU's Docker uses `lmsysorg/sglang` as the base image, so it includes the `sglang` inference acceleration framework and necessary dependencies by default. Therefore, on compatible devices, you can directly use `sglang` to accelerate VLM model inference.
@@ -28,8 +26,6 @@ MinerU's Docker uses `lmsysorg/sglang` as the base image, so it includes the `sg
 >
 > If your device doesn't meet the above requirements, you can still use other features of MinerU, but cannot use `sglang` to accelerate VLM model inference, meaning you cannot use the `vlm-sglang-engine` backend or start the `vlm-sglang-server` service.
 
----
-
 ## Start Docker Container:
 
 ```bash
@@ -44,8 +40,6 @@ docker run --gpus all \
 After executing this command, you will enter the Docker container's interactive terminal with some ports mapped for potential services. You can directly run MinerU-related commands within the container to use MinerU's features.
 You can also directly start MinerU services by replacing `/bin/bash` with service startup commands. For detailed instructions, please refer to the [MinerU Usage Documentation](../usage/index.md).
 
----
-
 ## Start Services Directly with Docker Compose
 
 We provide a [compose.yaml](https://github.com/opendatalab/MinerU/blob/master/docker/compose.yaml) file that you can use to quickly start MinerU services.
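
For readers following along, the compose-based startup referenced above is typically run from a checkout of the MinerU repository. A minimal sketch, assuming Docker Compose v2 and that `docker/compose.yaml` is used as-is (which services it enables by default is not shown in this diff):

```bash
# Start the services defined in docker/compose.yaml in the background.
docker compose -f docker/compose.yaml up -d

# Stop and remove them again when finished.
docker compose -f docker/compose.yaml down
```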

+ 0 - 12
docs/en/usage/advanced_cli_parameters.md

@@ -1,7 +1,5 @@
 # Advanced Command Line Parameters
 
----
-
 ## SGLang Acceleration Parameter Optimization
 
 ### Memory Optimization Parameters
@@ -11,8 +9,6 @@
 > - If you encounter insufficient VRAM when using a single graphics card, you may need to reduce the KV cache size with `--mem-fraction-static 0.5`. If VRAM issues persist, try reducing it further to `0.4` or lower.
 > - If you have two or more graphics cards, you can try using tensor parallelism (TP) mode to simply expand available VRAM: `--tp-size 2`
 
----
-
 ### Performance Optimization Parameters
 > [!TIP]
 > If you can already use SGLang normally for accelerated VLM model inference but still want to further improve inference speed, you can try the following parameters:
@@ -20,15 +16,11 @@
 > - If you have multiple graphics cards, you can use SGLang's multi-card parallel mode to increase throughput: `--dp-size 2`
 > - You can also enable `torch.compile` to accelerate inference speed by approximately 15%: `--enable-torch-compile`
 
----
-
 ### Parameter Passing Instructions
 > [!TIP]
 > - All officially supported SGLang parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`
 > - If you want to learn more about `sglang` parameter usage, please refer to the [SGLang official documentation](https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands)
 
----
-
 ## GPU Device Selection and Configuration
 
 ### CUDA_VISIBLE_DEVICES Basic Usage
@@ -39,8 +31,6 @@
 >   ```
 > - This specification method is effective for all command line calls, including `mineru`, `mineru-sglang-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.
 
----
-
 ### Common Device Configuration Examples
 > [!TIP]
 > Here are some common `CUDA_VISIBLE_DEVICES` setting examples:
@@ -52,8 +42,6 @@
 >   CUDA_VISIBLE_DEVICES=""  # No GPU will be visible
 >   ```
 
----
-
 ## Practical Application Scenarios
 > [!TIP]
 > Here are some possible usage scenarios:
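
To make the parameter-passing rules above concrete, here is a hedged sketch that combines only flags quoted in this file; the numeric values are illustrative, not recommendations from the diff:

```bash
# Single GPU with limited VRAM: shrink the KV cache as suggested above.
mineru-sglang-server --mem-fraction-static 0.5

# Two GPUs: pool VRAM via tensor parallelism and enable torch.compile.
mineru-sglang-server --tp-size 2 --enable-torch-compile

# Several GPUs: raise throughput with data parallelism.
mineru-sglang-server --dp-size 2
```

Per the parameter-passing note above, the same flags can also be appended to `mineru`, `mineru-gradio`, or `mineru-api`.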

+ 0 - 6
docs/en/usage/quick_usage.md

@@ -7,8 +7,6 @@ export MINERU_MODEL_SOURCE=modelscope
 ```
 For more information about model source configuration and custom local model paths, please refer to the [Model Source Documentation](./model_source.md) in the documentation.
 
----
-
 ## Quick Usage via Command Line
 MinerU has built-in command line tools that allow users to quickly use MinerU for PDF parsing through the command line:
 ```bash
@@ -35,8 +33,6 @@ mineru -p <input_path> -o <output_path> -b vlm-transformers
 
 If you need to adjust parsing options through custom parameters, you can also check the more detailed [Command Line Tools Usage Instructions](./cli_tools.md) in the documentation.
 
----
-
 ## Advanced Usage via API, WebUI, sglang-client/server
 
 - Direct Python API calls: [Python Usage Example](https://github.com/opendatalab/MinerU/blob/master/demo/demo.py)
@@ -72,8 +68,6 @@ If you need to adjust parsing options through custom parameters, you can also ch
 > All officially supported sglang parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`.
 > We have compiled some commonly used parameters and usage methods for `sglang`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
 
----
-
 ## Extending MinerU Functionality with Configuration Files
 
 MinerU is now ready to use out of the box, but also supports extending functionality through configuration files. You can edit `mineru.json` file in your user directory to add custom configurations.
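
As a quick orientation, the workflow this file describes reduces to the commands already visible in the hunk headers above (the paths are the placeholders used by the file itself):

```bash
# Select the ModelScope model source, then parse with the default pipeline backend...
export MINERU_MODEL_SOURCE=modelscope
mineru -p <input_path> -o <output_path>

# ...or with the vlm-transformers backend mentioned in the same section.
mineru -p <input_path> -o <output_path> -b vlm-transformers
```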

+ 0 - 6
docs/zh/quick_start/docker_deployment.md

@@ -13,8 +13,6 @@ docker build -t mineru-sglang:latest -f Dockerfile .
 > [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile)默认使用`lmsysorg/sglang:v0.4.8.post1-cu126`作为基础镜像,支持Turing/Ampere/Ada Lovelace/Hopper平台,
 > 如您使用较新的`Blackwell`平台,请将基础镜像修改为`lmsysorg/sglang:v0.4.8.post1-cu128-b200` 再执行build操作。
 
----
-
 ## Docker说明
 
 Mineru的docker使用了`lmsysorg/sglang`作为基础镜像,因此在docker中默认集成了`sglang`推理加速框架和必需的依赖环境。因此在满足条件的设备上,您可以直接使用`sglang`加速VLM模型推理。
@@ -27,8 +25,6 @@ Mineru的docker使用了`lmsysorg/sglang`作为基础镜像,因此在docker中
 >
 > 如果您的设备不满足上述条件,您仍然可以使用MinerU的其他功能,但无法使用`sglang`加速VLM模型推理,即无法使用`vlm-sglang-engine`后端和启动`vlm-sglang-server`服务。
 
----
-
 ## 启动 Docker 容器:
 
 ```bash
@@ -43,8 +39,6 @@ docker run --gpus all \
 执行该命令后,您将进入到Docker容器的交互式终端,并映射了一些端口用于可能会使用的服务,您可以直接在容器内运行MinerU相关命令来使用MinerU的功能。
 您也可以直接通过替换`/bin/bash`为服务启动命令来启动MinerU服务,详细说明请参考[MinerU使用文档](../usage/index.md)。
 
----
-
 ## 通过 Docker Compose 直接启动服务
 
 我们提供了[compose.yml](https://github.com/opendatalab/MinerU/blob/master/docker/compose.yaml)文件,您可以通过它来快速启动MinerU服务。
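
For reference, the container start this file documents follows the usual `docker run` pattern. A minimal sketch in which only `--gpus all`, the `mineru-sglang:latest` image tag, and the trailing `/bin/bash` come from the surrounding docs; the interactive flags are assumptions, and the port mappings of the full documented command are omitted:

```bash
# Interactive container; replace /bin/bash with a service startup
# command to launch a MinerU service directly.
docker run --gpus all -it --rm mineru-sglang:latest /bin/bash
```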

+ 0 - 12
docs/zh/usage/advanced_cli_parameters.md

@@ -1,7 +1,5 @@
 # 命令行参数进阶
 
----
-
 ## SGLang 加速参数优化
 
 ### 显存优化参数
@@ -11,8 +9,6 @@
 > - 如果您使用单张显卡遇到显存不足的情况时,可能需要调低KV缓存大小,`--mem-fraction-static 0.5`,如仍出现显存不足问题,可尝试进一步降低到`0.4`或更低
 > - 如您有两张以上显卡,可尝试通过张量并行(TP)模式简单扩充可用显存:`--tp-size 2`
 
----
-
 ### 性能优化参数
 > [!TIP]
 > 如果您已经可以正常使用sglang对vlm模型进行加速推理,但仍然希望进一步提升推理速度,可以尝试以下参数:
@@ -20,15 +16,11 @@
 > - 如果您有超过多张显卡,可以使用sglang的多卡并行模式来增加吞吐量:`--dp-size 2`
 > - 同时您可以启用`torch.compile`来将推理速度加速约15%:`--enable-torch-compile`
 
----
-
 ### 参数传递说明
 > [!TIP]
 > - 所有sglang官方支持的参数都可用通过命令行参数传递给 MinerU,包括以下命令:`mineru`、`mineru-sglang-server`、`mineru-gradio`、`mineru-api`
 > - 如果您想了解更多有关`sglang`的参数使用方法,请参考 [sglang官方文档](https://docs.sglang.ai/backend/server_arguments.html#common-launch-commands)
 
----
-
 ## GPU 设备选择与配置
 
 ### CUDA_VISIBLE_DEVICES 基本用法
@@ -39,8 +31,6 @@
 >   ```
 > - 这种指定方式对所有的命令行调用都有效,包括 `mineru`、`mineru-sglang-server`、`mineru-gradio` 和 `mineru-api`,且对`pipeline`、`vlm`后端均适用。
 
----
-
 ### 常见设备配置示例
 > [!TIP]
 > 以下是一些常见的 `CUDA_VISIBLE_DEVICES` 设置示例:
@@ -52,8 +42,6 @@
 >   CUDA_VISIBLE_DEVICES=""  # No GPU will be visible
 >   ```
 
----
-
 ## 实际应用场景
 
 > [!TIP]
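
To illustrate the device-selection pattern described in this file, a small sketch using only forms quoted in the docs (the GPU index is illustrative):

```bash
# Run MinerU on one explicitly chosen GPU.
CUDA_VISIBLE_DEVICES=0 mineru -p <input_path> -o <output_path>

# Hide all GPUs (the empty-string form shown in the examples above).
CUDA_VISIBLE_DEVICES="" mineru -p <input_path> -o <output_path>
```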

+ 15 - 22
docs/zh/usage/cli_tools.md

@@ -31,33 +31,28 @@ mineru-api --help
 Usage: mineru-api [OPTIONS]
 
 Options:
-  --host TEXT     Server host (default: 127.0.0.1)
-  --port INTEGER  Server port (default: 8000)
-  --reload        Enable auto-reload (development mode)
-  --help          Show this message and exit.
+  --host TEXT     服务器主机地址(默认:127.0.0.1)
+  --port INTEGER  服务器端口(默认:8000)
+  --reload        启用自动重载(开发模式)
+  --help          显示此帮助信息并退出
 ```
 ```bash
 mineru-gradio --help
 Usage: mineru-gradio [OPTIONS]
 
 Options:
-  --enable-example BOOLEAN        Enable example files for input.The example
-                                  files to be input need to be placed in the
-                                  `example` folder within the directory where
-                                  the command is currently executed.
-  --enable-sglang-engine BOOLEAN  Enable SgLang engine backend for faster
-                                  processing.
-  --enable-api BOOLEAN            Enable gradio API for serving the
-                                  application.
-  --max-convert-pages INTEGER     Set the maximum number of pages to convert
-                                  from PDF to Markdown.
-  --server-name TEXT              Set the server name for the Gradio app.
-  --server-port INTEGER           Set the server port for the Gradio app.
+  --enable-example BOOLEAN        启用示例文件输入(需要将示例文件放置在当前
+                                  执行命令目录下的 `example` 文件夹中)
+  --enable-sglang-engine BOOLEAN  启用 SgLang 引擎后端以提高处理速度
+  --enable-api BOOLEAN            启用 Gradio API 以提供应用程序服务
+  --max-convert-pages INTEGER     设置从 PDF 转换为 Markdown 的最大页数
+  --server-name TEXT              设置 Gradio 应用程序的服务器主机名
+  --server-port INTEGER           设置 Gradio 应用程序的服务器端口
   --latex-delimiters-type [a|b|all]
-                                  Set the type of LaTeX delimiters to use in
-                                  Markdown rendering:'a' for type '$', 'b' for
-                                  type '()[]', 'all' for both types.
-  --help                          Show this message and exit.
+                                  设置在 Markdown 渲染中使用的 LaTeX 分隔符类型
+                                  ('a' 表示 '$' 类型,'b' 表示 '()[]' 类型,
+                                  'all' 表示两种类型都使用)
+  --help                          显示此帮助信息并退出
 ```
 
 ## 环境变量说明
@@ -71,5 +66,3 @@ MinerU命令行工具的某些参数存在相同功能的环境变量配置,
 - `MINERU_TOOLS_CONFIG_JSON`:用于指定配置文件路径,默认为用户目录下的`mineru.json`,可通过环境变量指定其他配置文件路径。
 - `MINERU_FORMULA_ENABLE`:用于启用公式解析,默认为`true`,可通过环境变量设置为`false`来禁用公式解析。
 - `MINERU_TABLE_ENABLE`:用于启用表格解析,默认为`true`,可通过环境变量设置为`false`来禁用表格解析。
-
-

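
Based on the `--help` output translated in this diff, typical launches look like the sketch below; the host, page limit, and boolean values are illustrative (the documented defaults are 127.0.0.1 and 8000 for `mineru-api`):

```bash
# HTTP API, overriding the default host.
mineru-api --host 0.0.0.0 --port 8000

# Gradio WebUI with the SgLang engine backend enabled and a page-count cap.
mineru-gradio --enable-sglang-engine true --max-convert-pages 20

# Environment-variable switch described at the end of the file:
# disable formula parsing for a single run.
MINERU_FORMULA_ENABLE=false mineru -p <input_path> -o <output_path>
```
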
+ 0 - 7
docs/zh/usage/quick_usage.md

@@ -7,8 +7,6 @@ export MINERU_MODEL_SOURCE=modelscope
 ```
 有关模型源配置和自定义本地模型路径的更多信息,请参考文档中的[模型源说明](./model_source.md)。
 
----
-
 ## 通过命令行快速使用
 MinerU内置了命令行工具,用户可以通过命令行快速使用MinerU进行PDF解析:
 ```bash
@@ -25,7 +23,6 @@ mineru -p <input_path> -o <output_path>
 > 命令行工具会在Linux和macOS系统自动尝试cuda/mps加速。Windows用户如需使用cuda加速,
 > 请前往 [Pytorch官网](https://pytorch.org/get-started/locally/) 选择适合自己cuda版本的命令安装支持加速的`torch`和`torchvision`。
 
-
 ```bash
 # 或指定vlm后端解析
 mineru -p <input_path> -o <output_path> -b vlm-transformers
@@ -35,8 +32,6 @@ mineru -p <input_path> -o <output_path> -b vlm-transformers
 
 如果需要通过自定义参数调整解析选项,您也可以在文档中查看更详细的[命令行工具使用说明](./cli_tools.md)。
 
----
-
 ## 通过api、webui、sglang-client/server进阶使用
 
 - 通过python api直接调用:[Python 调用示例](https://github.com/opendatalab/MinerU/blob/master/demo/demo.py)
@@ -72,8 +67,6 @@ mineru -p <input_path> -o <output_path> -b vlm-transformers
 > 所有sglang官方支持的参数都可用通过命令行参数传递给 MinerU,包括以下命令:`mineru`、`mineru-sglang-server`、`mineru-gradio`、`mineru-api`,
 > 我们整理了一些`sglang`使用中的常用参数和使用方法,可以在文档[命令行进阶参数](./advanced_cli_parameters.md)中获取。
 
----
-
 ## 基于配置文件扩展 MinerU 功能
 
 MinerU 现已实现开箱即用,但也支持通过配置文件扩展功能。您可通过编辑用户目录下的 `mineru.json` 文件,添加自定义配置。
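
Rounding out the advanced-usage section above, the two sglang-based options it names can be exercised as follows; the backend name and the server command both appear in this diff, while any client-side connection flags are outside its scope:

```bash
# In-process engine backend (requires a sglang-capable GPU).
mineru -p <input_path> -o <output_path> -b vlm-sglang-engine

# Standalone service mode; clients then connect to this server.
mineru-sglang-server
```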