[megatron, training_utils] fix: router replay R3 align router replay data with global layer indices #5037
Conversation
Code Review
This pull request correctly addresses an issue in the router replay logic by aligning replay data with global layer indices when appropriate. The change introduces conditional logic that uses the global `layer_number` from router instances and falls back to local offset indexing otherwise, which matches the stated goal. The added bounds check on the layer index is a good defensive measure. I've included one suggestion to make the logic more robust by failing fast if a router is missing the `layer_number` attribute when global indexing is expected, which would prevent silent errors from potential misconfigurations.
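A minimal sketch of the suggested fail-fast behavior, assuming a helper of this shape (the function name and the `use_global_indexing` flag are illustrative, not the PR's exact code):

```python
def resolve_layer_idx(router, i: int, offset: int, use_global_indexing: bool) -> int:
    """Resolve which layer of the replay tensor feeds this router.

    Fail fast when global indexing is expected but the router carries no
    global layer number, instead of silently falling back to local offsets.
    """
    if use_global_indexing:
        layer_number = getattr(router, "layer_number", None)
        if layer_number is None:
            raise ValueError(
                "Router has no `layer_number` but replay data spans all layers; "
                "refusing to silently fall back to local offset indexing."
            )
        return layer_number - 1  # assuming Megatron's 1-based layer numbering
    return i + offset  # local-offset path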
Pull request overview
This PR adjusts how router replay (R3) maps recorded routing indices back to router instances in Megatron, aiming to use global layer indices when routed‑experts tensors span all transformer layers and otherwise fall back to local offsets. The goal is to correctly align replay data for architectures like DeepSeek V3 where routers and layers may not map 1:1 by simple local offsets.
Changes:
- Compute `num_layers_in_data` from the replay tensor and detect whether it matches `tf_config.num_layers` to decide between global and local layer indexing.
- For each router instance, determine a `layer_idx` either from a `layer_number` attribute (global index path) or from the original `i + offset` scheme (local index path).
- Add explicit bounds checking on `layer_idx` and raise a `ValueError` if the computed index is outside the replay data's layer dimension, as sketched below.
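Putting the pieces together, a hedged sketch of the flow described in this list (identifiers such as `routed_experts`, `routers`, `tf_config`, and the `set_replay_data` setter are assumptions based on the summary, not verbatim from the diff):

```python
import torch

def set_router_replay_data_sketch(routed_experts: torch.Tensor, routers, tf_config, offset: int = 0):
    """Distribute recorded routing indices to router instances.

    routed_experts is assumed to be shaped [num_layers_in_data, ...].
    """
    num_layers_in_data = routed_experts.shape[0]
    # If the replay tensor covers every transformer layer (dense + MoE),
    # align by each router's global layer number; otherwise keep the
    # original local-offset scheme.
    use_global_indexing = num_layers_in_data == tf_config.num_layers
    for i, router in enumerate(routers):
        layer_number = getattr(router, "layer_number", None)
        if use_global_indexing and layer_number is not None:
            layer_idx = layer_number - 1  # assuming Megatron's 1-based numbering
        else:
            layer_idx = i + offset  # local-offset fallback
        if not 0 <= layer_idx < num_layers_in_data:
            raise ValueError(
                f"Router layer index {layer_idx} is out of bounds for replay "
                f"data with {num_layers_in_data} layers"
            )
        router.set_replay_data(routed_experts[layer_idx])  # hypothetical setter
```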
Code Review
This pull request introduces a fix for router replay in the DeepSeek V3 architecture, aligning router replay data with global layer indices when the data encompasses all layers, and correctly falling back to local offset indexing otherwise. The changes involve patching TopKRouter to store the global layer index and updating the data loading logic in set_router_replay_data to use this global index when appropriate. The implementation is sound, includes necessary fallbacks for robustness, and adds a validation check for layer indices. The code appears to correctly address the issue described, and I did not find any issues of high or critical severity.
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.
…data with global layer indices

This PR fixes router replay R3 for the DeepSeek V3 architecture: when routed‑experts tensors include dense layers (full `num_layers`), we map replay data by each router's global `layer_number`; otherwise, we fall back to local offset indexing and validate bounds to catch mismatches.

Signed-off-by: Hollow Man <hollowman@opensuse.org>
What does this PR do?
DeepSeek-V3-style MoE employs a hybrid architecture with the first three layers as dense FFN blocks before switching to MoE layers, which means not every layer has a router.
This PR fixes router replay R3 for the DeepSeek V3 architecture. vLLM reports `routed_experts` across all transformer layers (including dense ones), while Megatron only instantiates routers for MoE layers, so mapping with `i + offset` silently shifts every MoE layer that follows a dense layer. When routed‑experts tensors include dense layers (full `num_layers`), we therefore map replay data by each router's global `layer_number`; otherwise, we fall back to local offset indexing and validate bounds to catch mismatches. We also patch `TopKRouter.set_layer_number` to store the global layer number in each `RouterReplay` instance, so that global alignment is reliable with VPP/PP.

Depends on vllm-project/vllm#33013
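For illustration, a sketch of how such a patch might record the global layer number on each router's replay state. `TopKRouter.set_layer_number` exists in Megatron per the PR description, but the `router_replay` attribute name and the patch mechanics here are assumptions, not the PR's exact code:

```python
from megatron.core.transformer.moe.router import TopKRouter

_orig_set_layer_number = TopKRouter.set_layer_number

def _patched_set_layer_number(self, layer_number: int):
    # Call the original Megatron implementation first.
    _orig_set_layer_number(self, layer_number)
    # Also stash the global layer number on the replay helper (if present),
    # so replay alignment survives pipeline/virtual-pipeline sharding.
    replay = getattr(self, "router_replay", None)  # hypothetical attribute
    if replay is not None:
        replay.layer_number = layer_number

TopKRouter.set_layer_number = _patched_set_layer_number
```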
Checklist Before Starting
- Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`; multiple modules are separated by commas, like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API, prepend `[BREAKING]` to the beginning of the title, e.g. `[BREAKING][fsdp, megatron] feat: dynamic batching`

Test
Without this fix:

With this fix, it looks good now:

API and Usage Example
# Add code snippet or script demonstrating how to use this

Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- Apply pre-commit checks: `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- Once your PR is ready for CI, send a message in the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)
- If your PR changes the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.