# Quick Start

If you encounter any installation issues, please check the [FAQ](../faq/index.md) first.

## Online Experience

### Official online web application

The official online version has the same functionality as the client, with a polished interface and rich feature set; it requires login to use.

- <https://mineru.net/OpenSourceTools/Extractor?source=github>

### Gradio-based online demo

A WebUI built on Gradio with a simple interface and only the core parsing functionality; no login is required.

- <https://www.modelscope.cn/studios/OpenDataLab/MinerU>
- <https://huggingface.co/spaces/opendatalab/MinerU>

## Local Deployment

> [!WARNING]
> **Prerequisites - Hardware and Software Environment Support**
>
> To ensure the stability and reliability of the project, we have optimized and tested only specific hardware and software environments during development. This ensures that users deploying and running the project on recommended system configurations achieve optimal performance and encounter the fewest compatibility issues.
>
> By concentrating our resources and efforts on mainstream environments, our team can resolve potential bugs more efficiently and develop new features in a timely manner.
>
> In non-mainstream environments, due to the diversity of hardware and software configurations and compatibility issues with third-party dependencies, we cannot guarantee 100% usability of the project. For users who wish to use this project in non-recommended environments, we therefore suggest carefully reading the documentation and the FAQ first, as most issues already have corresponding solutions there. We also encourage community feedback on issues so that we can gradually expand our support range.
| Parsing Backend | pipeline<br>(Accuracy<sup>1</sup> 82+) | vlm transformers<br>(Accuracy<sup>1</sup> 90+) | vlm mlx-engine<br>(Accuracy<sup>1</sup> 90+) | vlm vllm-engine /<br>vllm-async-engine<br>(Accuracy<sup>1</sup> 90+) | vlm http-client<br>(Accuracy<sup>1</sup> 90+) |
|---|---|---|---|---|---|
| Backend Features | Fast, no hallucinations | Good compatibility, but slower | Faster than transformers | Fast, compatible with the vLLM ecosystem | Suitable for OpenAI-compatible servers<sup>5</sup> |
| Operating System | Linux<sup>2</sup> / Windows / macOS | Linux<sup>2</sup> / Windows / macOS | macOS<sup>3</sup> | Linux<sup>2</sup> / Windows<sup>4</sup> | Any |
| CPU Inference Support | ✅ | ❌ | ❌ | ❌ | Not required |
| GPU Requirements | Volta or later architecture, 6 GB VRAM or more, or Apple Silicon | Volta or later architecture, 6 GB VRAM or more, or Apple Silicon | Apple Silicon | Volta or later architecture, 8 GB VRAM or more | Not required |
| Memory Requirements | Minimum 16 GB, 32 GB recommended | Minimum 16 GB, 32 GB recommended | Minimum 16 GB, 32 GB recommended | Minimum 16 GB, 32 GB recommended | 8 GB |
| Disk Space Requirements | 20 GB or more, SSD recommended | 20 GB or more, SSD recommended | 20 GB or more, SSD recommended | 20 GB or more, SSD recommended | 2 GB |
| Python Version | 3.10-3.13 | 3.10-3.13 | 3.10-3.13 | 3.10-3.13 | 3.10-3.13 |
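The requirements above can be turned into a quick pre-flight check before installing. The sketch below is a minimal, illustrative helper (the function names and the `MIN_VRAM_GB` mapping are our own, not part of any MinerU API); the thresholds are taken directly from the table.

```python
import sys

# Minimum GPU VRAM in GB per parsing backend, from the table above.
# Backends with no GPU requirement are listed as 0.
MIN_VRAM_GB = {
    "pipeline": 6,
    "transformers": 6,
    "mlx-engine": 0,       # runs on Apple Silicon, no discrete-GPU VRAM floor
    "vllm-engine": 8,
    "http-client": 0,      # inference happens on a remote server
}

def python_supported(major: int, minor: int) -> bool:
    """The table lists Python 3.10-3.13 as the supported range."""
    return (3, 10) <= (major, minor) <= (3, 13)

def vram_ok(backend: str, available_vram_gb: float) -> bool:
    """Check available VRAM against the table's per-backend minimum."""
    return available_vram_gb >= MIN_VRAM_GB[backend]

if __name__ == "__main__":
    print("Python OK:", python_supported(*sys.version_info[:2]))
    print("vLLM VRAM OK (8 GB):", vram_ok("vllm-engine", 8))
```

Detecting the actual available VRAM is hardware-specific (e.g. via `nvidia-smi` on NVIDIA GPUs), so the sketch leaves that value as an input.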