This guide walks you through setting up your development environment for contributing to vLLM.

Clone the repository

The first step is to clone the GitHub repository:
git clone https://github.com/vllm-project/vllm.git
cd vllm

Python environment setup

vLLM is compatible with Python versions 3.10 through 3.13. However, vLLM’s default Dockerfile ships with Python 3.12, and CI tests (except mypy) run with Python 3.12.
We recommend developing with Python 3.12 to minimize the chance of your local environment clashing with our CI environment.
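Before creating the environment, you can sanity-check your interpreter against the supported range. A minimal sketch, assuming `python3` is on your PATH (substitute your interpreter name if it differs):

```shell
# Query the minor version of the active Python 3 interpreter and check it
# against vLLM's supported 3.10–3.13 range.
minor=$(python3 -c 'import sys; print(sys.version_info.minor)')
if [ "$minor" -ge 10 ] && [ "$minor" -le 13 ]; then
    echo "Python 3.$minor is within the supported range"
else
    echo "Python 3.$minor is outside the supported range" >&2
fi
```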
Configure your Python virtual environment:
1. Create virtual environment

Create and activate a Python virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
2. Install uv (recommended)

Install uv for faster package management:
pip install uv

Installation options

Python-only development

If you are only developing vLLM’s Python code, install vLLM using:
VLLM_USE_PRECOMPILED=1 uv pip install -e .
This uses pre-compiled binaries and skips the compilation of C++/CUDA code, which is much faster.

Python and CUDA/C++ development

If you are developing vLLM’s Python and CUDA/C++ code:
1. Install PyTorch

Install PyTorch first:
uv pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu129
2. Install build dependencies

Install the necessary build dependencies, skipping torch as it was installed in the previous step:
grep -v '^torch==' requirements/build.txt | uv pip install -r -
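To see what that filter does, here is the same grep pattern applied to a small illustrative sample (these package names are made up for the demo, not the actual contents of requirements/build.txt):

```shell
# grep -v '^torch==' drops any line that pins torch itself and passes the
# remaining requirements through; `uv pip install -r -` then reads the
# filtered list from stdin.
printf 'cmake>=3.26\ntorch==2.6.0\nninja\n' | grep -v '^torch=='
```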
3. Install vLLM from source

Install vLLM without build isolation:
uv pip install -e . --no-build-isolation
If any of the above commands fails with a Python.h: No such file or directory error, install the Python development headers with sudo apt install python3-dev.

Installation for other hardware

For more details about installing from source and installing for other hardware platforms (AMD, Intel, etc.), check out the installation instructions for your hardware and head to the “Build wheel from source” section.

Incremental compilation workflow

For an optimized workflow when iterating on C++/CUDA kernels, see the Incremental Compilation Workflow for recommendations.
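That workflow has the vLLM-specific details. As one general-purpose sketch (not specific to vLLM's recommendations), a compiler cache can be layered onto any CMake-driven source build via CMake's launcher variables; this assumes ccache is installed and CMake 3.17 or newer:

```shell
# CMake 3.17+ honors these environment variables and prefixes every compiler
# invocation with ccache, so unchanged translation units hit the cache on
# subsequent rebuilds.
export CMAKE_C_COMPILER_LAUNCHER=ccache
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
# Then rerun the editable install:
#   uv pip install -e . --no-build-isolation
```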

Setting up linting

vLLM uses pre-commit to lint and format the codebase.
1. Install pre-commit

uv pip install pre-commit
2. Install Git hooks

pre-commit install
vLLM’s pre-commit hooks will now run automatically every time you commit.

Manual pre-commit usage

You can manually run the pre-commit hooks using:
pre-commit run     # runs on staged files
pre-commit run -a  # runs on all files (short for --all-files)

CI-only hooks

Some pre-commit hooks only run in CI. If you need to run them locally:
pre-commit run --hook-stage manual markdownlint
pre-commit run --hook-stage manual mypy-3.10

Documentation setup

vLLM uses MkDocs for documentation. To preview documentation locally:
1. Install documentation dependencies

uv pip install -r requirements/docs.txt
Ensure your Python version is compatible with the plugins (e.g., mkdocs-awesome-nav requires Python 3.10+).
2. Run the development server

mkdocs serve                           # with API ref (~10 minutes)
API_AUTONAV_EXCLUDE=vllm mkdocs serve  # API ref off (~15 seconds)
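The API_AUTONAV_EXCLUDE=vllm prefix uses standard shell inline-environment syntax: the variable is set only for that single mkdocs serve invocation and does not leak into your session. A minimal demonstration of the scoping, using a made-up variable name:

```shell
# An inline assignment applies only to the command it prefixes; the child
# process sees the variable, but the surrounding shell does not.
FOO=bar sh -c 'echo "inside: $FOO"'   # prints: inside: bar
echo "after: ${FOO:-unset}"           # prints: after: unset
```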
3. Preview in browser

Once you see Serving on http://127.0.0.1:8000/ in the logs, open http://127.0.0.1:8000/ in your browser.
For additional features and advanced configuration options, refer to the MkDocs documentation.

Testing setup

vLLM uses pytest to test the codebase.

Install test dependencies

# Install the test dependencies used in CI (CUDA only)
uv pip install -r requirements/common.txt -r requirements/dev.txt --torch-backend=auto

# Install some common test dependencies (hardware agnostic)
uv pip install pytest pytest-asyncio

Running tests

# Run all tests
pytest tests/

# Run tests for a single test file with detailed output
pytest -s -v tests/test_logger.py
Known limitations:
  • The repository is not fully checked by mypy.
  • Not all unit tests pass when run on CPU platforms. If you don’t have access to a GPU to run unit tests locally, rely on the continuous integration system to run them.

Verify your setup

To verify your development environment is set up correctly:
# Import vLLM in Python
python -c "import vllm; print(vllm.__version__)"

# Run a simple test
pytest tests/test_logger.py -v

Next steps

  • Testing guide: learn how to write and run tests
  • Adding models: implement new model architectures
