
Pinned repositories

  1. vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 73.6k stars · 14.5k forks

  2. llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.9k stars · 443 forks

  3. recipes

    Common recipes to run vLLM

    Jupyter Notebook · 504 stars · 171 forks

  4. speculators

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 281 stars · 55 forks

  5. semantic-router

    System Level Intelligent Router for Mixture-of-Models at Cloud, Data Center and Edge

    Go · 3.5k stars · 577 forks

  6. vllm-omni

    A framework for efficient model inference with omni-modality models

    Python · 3.2k stars · 557 forks

Repositories

Showing 10 of 34 repositories
  • vllm-gaudi

    Community-maintained hardware plugin for vLLM on Intel Gaudi

    Python · 32 stars · Apache-2.0 · 115 forks · 1 open issue · 66 open PRs · Updated Mar 19, 2026
  • tpu-inference

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 264 stars · Apache-2.0 · 127 forks · 48 open issues (3 need help) · 158 open PRs · Updated Mar 19, 2026
  • semantic-router

    System Level Intelligent Router for Mixture-of-Models at Cloud, Data Center and Edge

    Go · 3,459 stars · Apache-2.0 · 577 forks · 96 open issues (13 need help) · 86 open PRs · Updated Mar 19, 2026
  • vllm-omni

    A framework for efficient model inference with omni-modality models

    Python · 3,208 stars · Apache-2.0 · 556 forks · 303 open issues (62 need help) · 200 open PRs · Updated Mar 19, 2026
  • vllm-ascend

    Community-maintained hardware plugin for vLLM on Ascend

    C++ · 1,803 stars · Apache-2.0 · 954 forks · 1,162 open issues (7 need help) · 343 open PRs · Updated Mar 19, 2026
  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 73,626 stars · Apache-2.0 · 14,534 forks · 1,714 open issues (45 need help) · 2,074 open PRs · Updated Mar 19, 2026
  • vllm-xpu-kernels

    The vLLM XPU kernels for Intel GPU

    C++ · 23 stars · Apache-2.0 · 33 forks · 7 open issues · 23 open PRs · Updated Mar 19, 2026
  • compressed-tensors

    A safetensors extension to efficiently store sparse quantized tensors on disk

    Python · 266 stars · Apache-2.0 · 69 forks · 6 open issues (1 needs help) · 16 open PRs · Updated Mar 19, 2026
  • guidellm

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 929 stars · Apache-2.0 · 139 forks · 62 open issues · 23 open PRs · Updated Mar 19, 2026
  • vllm-metal

    Community-maintained hardware plugin for vLLM on Apple Silicon

    Python · 692 stars · Apache-2.0 · 72 forks · 11 open issues (2 need help) · 8 open PRs · Updated Mar 19, 2026