@@ -1028,7 +1028,7 @@ We highly recommend using vLLM for deployment and inference. All of our evaluati
 ```shell
 # Launch vLLM model server
-vllm serve rednote-hilab/dots.ocr --trust-remote-code --async-scheduling
+vllm serve rednote-hilab/dots.ocr --trust-remote-code --async-scheduling --gpu-memory-utilization 0.95
 
 # vLLM API Demo
 # See demo/demo_vllm.py for details on parameter and prompt settings that help achieve the best output quality.
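
The added `--gpu-memory-utilization 0.95` flag raises the fraction of GPU memory vLLM pre-allocates (the default is 0.9), leaving more headroom for the KV cache. Once the server is up, it exposes an OpenAI-compatible API; the sketch below shows one way to query it from Python. This is a minimal sketch, not the contents of demo/demo_vllm.py: the port (vLLM's default 8000), the image URL, and the prompt text are placeholder assumptions, and demo/demo_vllm.py remains the reference for the recommended parameter and prompt settings.

```python
# Minimal sketch of querying the launched vLLM server via its
# OpenAI-compatible endpoint. Port, image URL, and prompt are
# placeholders; see demo/demo_vllm.py for the recommended settings.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="rednote-hilab/dots.ocr",
    messages=[
        {
            "role": "user",
            "content": [
                # Hypothetical input image; replace with a real document page.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/page.png"}},
                # Placeholder prompt, not the tuned prompt from the demo.
                {"type": "text",
                 "text": "Extract all text from this document image."},
            ],
        }
    ],
    temperature=0.0,  # deterministic decoding, typical for OCR
)
print(response.choices[0].message.content)
```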