Use uv auto torch backend for vllm 0.20
@@ -19,7 +19,9 @@ Both specs start from the same base vLLM configuration. The base contains only s
As of 2026-05-02, PyPI lists `vllm==0.20.0` as the current community release. The dash0 runtime venv lives on local rootfs rather than CPFS, because installing torch/CUDA wheels into CPFS was I/O-bound:
`/tmp/wjh/venvs/vllm-0.20.0`
`/tmp/wjh/venvs/vllm-0.20.0-auto`
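
For reference, a minimal sketch of how a rootfs-local venv can be created with uv; the Python version is an assumption, the commit does not record it:

```sh
# Create the venv on local rootfs (/tmp) instead of CPFS to avoid
# the I/O-bound wheel extraction seen on the shared filesystem.
# Python 3.12 is assumed here, not recorded in this commit.
uv venv /tmp/wjh/venvs/vllm-0.20.0-auto --python 3.12
source /tmp/wjh/venvs/vllm-0.20.0-auto/bin/activate
```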
The first plain `pip install vllm==0.20.0` smoke test pulled `torch 2.11.0+cu130`, which failed on dash0's driver (`570.133.20`, CUDA 12.9). The active install uses the vLLM-documented `uv pip install vllm==0.20.0 --torch-backend=auto` path, so uv selects a CUDA backend compatible with the installed driver.
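
A sketch of the failing and working paths side by side, with a post-install compatibility check; the verification commands are illustrative additions, not taken from the install log:

```sh
# Driver 570.133.20 supports up to CUDA 12.9, so a cu130 torch wheel fails.
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Failing path: plain pip resolves the newest torch build (2.11.0+cu130).
# pip install vllm==0.20.0

# Working path: let uv pick a CUDA backend compatible with the driver.
uv pip install vllm==0.20.0 --torch-backend=auto

# Verify the selected wheel: torch.version.cuda should be <= 12.9.
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```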
Install log: