chore: upgrade vLLM to 0.16.0 from GitHub releases #1893
Open
Conversation
- Update vLLM from 0.16.0rc3 (S3) to 0.16.0 (GitHub release)
- Update flash-attn to torch 2.9 compatible build (cu129)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
vLLM 0.16.0 is now available on PyPI, so we can use the official package instead of the GitHub release wheel.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
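A rough sketch of what this switch could look like in `pyproject.toml`, assuming the project previously pulled the pre-release wheel through a `tool.uv.sources` direct-URL override (the bucket URL and wheel tags below are placeholders, not the repo's actual values):

```diff
 [project]
 dependencies = [
-    "vllm==0.16.0rc3",
+    "vllm>=0.16.0",
 ]

-# Pre-release wheel fetched from a custom (S3) source; unnecessary once
-# the final release is published on PyPI.
-[tool.uv.sources]
-vllm = { url = "https://<bucket>.s3.amazonaws.com/vllm-0.16.0rc3-<python-platform-tags>.whl" }
```

With the override removed, `uv lock` resolves `vllm` from PyPI like any other dependency and records it in `uv.lock`.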
rasdani reviewed Feb 26, 2026
Co-authored-by: rasdani <73563550+rasdani@users.noreply.github.com>
Signed-off-by: samsja <55492238+samsja@users.noreply.github.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
This reverts commit 75c4c63.
Note: Medium Risk

Upgrades core GPU/inference dependencies (`vllm`, `flash-attn`, and the resolved `torch`/`triton` stack), which can change runtime behavior and binary compatibility for training/inference despite no application code changes.

Overview
- Moves `vllm` from a pinned pre-release S3 wheel (0.16.0rc3) to the official PyPI release (`vllm>=0.16.0`) and removes the custom `tool.uv.sources` override for it.
- Updates the `flash-attn` optional dependency to a Torch 2.9/CUDA 12.9-compatible wheel (see the sketch after this list) and refreshes `uv.lock` accordingly, notably resolving to `torch`/`torchaudio`/`torchvision` 2.9.1 and `triton` 3.5.1, plus related NVIDIA package version shifts.

Written by Cursor Bugbot for commit 73e6873.
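For the `flash-attn` side, the change is presumably an updated direct-URL pin in an optional dependency group; a hypothetical before/after, where the group name, release paths, and wheel tags are illustrative placeholders rather than the PR's actual values:

```diff
 [project.optional-dependencies]
 flash-attn = [
-    "flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/<old-tag>/flash_attn-<ver>+cu12torch2.8-<tags>.whl",
+    "flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/<new-tag>/flash_attn-<ver>+cu129torch2.9-<tags>.whl",
 ]
```

Re-running `uv lock` after editing the pin regenerates `uv.lock`, which is where the resolved `torch` 2.9.1 / `triton` 3.5.1 stack shows up.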
flash-attnoptional dependency to a Torch 2.9/CUDA 12.9-compatible wheel, and refreshesuv.lockaccordingly (notably resolving totorch/torchaudio/torchvision2.9.1 andtriton3.5.1, plus related NVIDIA package version shifts).Written by Cursor Bugbot for commit 73e6873. This will update automatically on new commits. Configure here.