feat: Add AdaptiveKMoeRoutingMethod for entropy-based dynamic expert … #10672
base: main
Conversation
…selection

Adds entropy-based adaptive K selection for MoE routing that dynamically selects the number of experts based on routing confidence (entropy):

- Low entropy (confident) -> fewer experts -> save compute
- High entropy (uncertain) -> more experts -> maintain quality

Validated results:

- Mixtral 8x7B: 52.5% compute reduction
- Qwen-MoE: 32.4% compute reduction
- OLMoE-1B-7B: 24.7% compute reduction

All with <0.5% perplexity impact.

Reference: 'Entropy-Guided Dynamic Expert Selection in MoE Models'

Author: Gabriele Balsamo ([email protected])
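For readers skimming the description, here is a minimal sketch of the entropy-to-K idea. It is illustrative only: the function name, thresholds, and the fixed top-k handling are assumptions, not the PR's actual code.

```python
import torch

def adaptive_k_route(router_logits: torch.Tensor,
                     k_values=(2, 4, 8),
                     entropy_thresholds=(1.3, 1.7)):
    """Pick a per-token expert count K from routing entropy.

    Low entropy (confident routing) -> small K; high entropy -> large K.
    Assumes the number of experts is at least k_values[-1].
    """
    probs = torch.softmax(router_logits.float(), dim=-1)
    # Shannon entropy of the routing distribution per token; eps avoids log(0).
    entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1)

    # Start every token at the largest K, then lower it where entropy is small.
    k_per_token = torch.full_like(entropy, k_values[-1], dtype=torch.int64)
    for threshold, k_val in zip(reversed(entropy_thresholds), reversed(k_values[:-1])):
        k_per_token = torch.where(entropy < threshold,
                                  torch.tensor(k_val, device=entropy.device),
                                  k_per_token)

    # Select the k_max best experts once; downstream code can zero out the
    # weights beyond each token's chosen K so tensor shapes stay static.
    topk_vals, topk_idx = torch.topk(probs, k_values[-1], dim=-1)
    return topk_idx, topk_vals, k_per_token
```

With k_values=(2, 4, 8) and thresholds (1.3, 1.7), tokens with routing entropy below 1.3 use 2 experts, tokens between 1.3 and 1.7 use 4, and everything else uses 8, which is where the reported compute savings come from.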
📝 Walkthrough
A new adaptive routing variant, AdaptiveKMoeRoutingMethod, is added to the fused MoE routing module for entropy-based dynamic expert selection.
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/modules/fused_moe/routing.py (1)
1-11: Missing NVIDIA copyright header. Per coding guidelines, all TensorRT-LLM source files should contain an NVIDIA copyright header with the year of latest meaningful modification. This file is missing the required header.
Proposed fix
Add the following at the top of the file:
# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

As per coding guidelines, all source files require an NVIDIA copyright header.
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/modules/fused_moe/routing.py`:
- Around line 770-772: AdaptiveKMoeRoutingMethod lacks the top_k attribute
required by BaseMoeRoutingMethod.get_experts_per_token(), causing an
AttributeError when experts_per_token is accessed; fix by initializing top_k in
AdaptiveKMoeRoutingMethod.__init__ (e.g., set self.top_k = k_max or appropriate
k value) or override get_experts_per_token in AdaptiveKMoeRoutingMethod to
return the correct experts-per-token logic, and ensure references to
RoutingMethodType.AdaptiveK and routing_method_type remain unchanged.
- Around line 740-745: The stats update in _update_stats is not synchronized and
can race when apply() runs concurrently; fix by adding a threading.Lock instance
(e.g., self._stats_lock) initialized in the module/class constructor and
guarding the modifications to _k_counts, _total_tokens, and _entropy_sum with
the lock (use a with self._stats_lock: block around the existing body of
_update_stats); alternatively, make stats collection opt-in by adding a boolean
flag (e.g., self._collect_stats) checked in apply() and _update_stats so callers
can disable stats in multithreaded contexts.
- Line 654: The parameter annotation for entropy_thresholds uses plain "list"
with a None default; change it to an explicit nullable type (e.g.,
Optional[list] or "list | None") and keep the default as None; update the
function/method signature where entropy_thresholds appears (routing.py, the
function taking entropy_thresholds) and add the corresponding import (from
typing import Optional) if using Optional[list]. Ensure you do not use a mutable
default and only change the type hint to explicitly allow None.
- Around line 650-675: The constructor for the adaptive routing class
incorrectly calls super().__init__ with arguments; BaseMoeRoutingMethod.__init__
accepts no parameters so remove the arguments and call super().__init__() (or
drop the call if not needed), leaving the rest of the initialization intact
(keep setting self.k_min, self.k_max, self.k_values, self.entropy_thresholds,
self.output_dtype) so instantiation no longer raises a TypeError.
- Around line 160-163: The Python enum RoutingMethodType currently defines
AdaptiveK = 7 before Unspecified = 6, breaking numeric order and desyncing with
the C++ RoutingMethodType in runner.h; either remove AdaptiveK from the Python
enum or add AdaptiveK with the same numeric value to the C++ enum in runner.h so
both definitions match, and then reorder the Python entries so numeric values
are ascending (ensure Unspecified = 6 then AdaptiveK = 7 if you keep it) and
update the file comment that references runner.h to reflect the synchronized
state.
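Read together, the five fix prompts above amount to roughly the following constructor shape. This is a hedged sketch assembled from the review suggestions; the collect_stats flag and the lock attribute name are proposals from this review, not merged code.

```python
import threading
from typing import List, Optional

import torch


class AdaptiveKMoeRoutingMethod(BaseMoeRoutingMethod):  # base class defined in routing.py

    def __init__(
        self,
        k_min: int = 2,
        k_max: int = 8,
        entropy_thresholds: Optional[List[float]] = None,  # explicit Optional (RUF013)
        output_dtype: torch.dtype = torch.float32,
        collect_stats: bool = False,  # stats become opt-in to avoid GPU-CPU syncs
    ):
        super().__init__()              # BaseMoeRoutingMethod.__init__ takes no arguments
        self.top_k = k_max              # satisfies get_experts_per_token()
        self.k_min = k_min
        self.k_max = k_max
        self.k_values = [k_min, (k_min + k_max) // 2, k_max]
        self.entropy_thresholds = entropy_thresholds or [1.3, 1.7]
        self.output_dtype = output_dtype
        self.collect_stats = collect_stats
        # Statistics tracking, guarded by a lock for multi-threaded apply() calls.
        self._k_counts = {k: 0 for k in self.k_values}
        self._total_tokens = 0
        self._entropy_sum = 0.0
        self._stats_lock = threading.Lock()

    def _update_stats(self, k_per_token: torch.Tensor, entropy: torch.Tensor):
        """Update internal statistics for monitoring compute savings."""
        with self._stats_lock:
            for k in self.k_values:
                self._k_counts[k] += (k_per_token == k).sum().item()
            self._total_tokens += k_per_token.numel()
            self._entropy_sum += entropy.sum().item()
```

The key points are the argument-free super().__init__(), the explicit top_k, the Optional type hint, and stats that are both opt-in and lock-guarded.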
🧹 Nitpick comments (3)
tensorrt_llm/_torch/modules/fused_moe/routing.py (3)
687-692: torch.where with mismatched dtypes may cause issues.
k is initialized as int32, but self.k_values[i] is a Python int. While PyTorch handles this, explicitly casting ensures consistency. Also, iterating in reverse through thresholds with torch.where overwrites values; consider whether this matches the intended threshold logic.
Suggested improvement for clarity

 def select_k_per_token(self, entropy: torch.Tensor) -> torch.Tensor:
     """Select K value for each token based on entropy thresholds."""
-    k = torch.full_like(entropy, self.k_values[-1], dtype=torch.int32)
+    device = entropy.device
+    k = torch.full((entropy.shape[0],), self.k_values[-1], dtype=torch.int32, device=device)
     for i in range(len(self.entropy_thresholds) - 1, -1, -1):
-        k = torch.where(entropy < self.entropy_thresholds[i], self.k_values[i], k)
+        k = torch.where(
+            entropy < self.entropy_thresholds[i],
+            torch.tensor(self.k_values[i], dtype=torch.int32, device=device),
+            k
+        )
     return k
709-709: Unused variables from tuple unpacking.
num_tokens and num_experts are unpacked but never used, as flagged by static analysis. Prefix them with underscores to indicate intentional disuse.
Proposed fix

-    num_tokens, num_experts = router_logits.shape
+    _num_tokens, _num_experts = router_logits.shape
     device = router_logits.device
740-745: Statistics update on every forward pass adds overhead.
Calling .sum().item() for each K value forces GPU-CPU synchronization on every token batch, which can significantly impact inference latency. Consider making stats collection conditional or batched.
Proposed fix: Make stats collection opt-in

 def __init__(
     self,
     k_min: int = 2,
     k_max: int = 8,
     entropy_thresholds: Optional[List[float]] = None,
     output_dtype: torch.dtype = torch.float32,
+    collect_stats: bool = False,
 ):
     ...
+    self.collect_stats = collect_stats
     # Statistics tracking
     self._k_counts = {k: 0 for k in self.k_values}
     self._total_tokens = 0
     self._entropy_sum = 0.0

 def apply(self, router_logits: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
     ...
     # Update statistics (for monitoring)
-    self._update_stats(k_per_token, entropy)
+    if self.collect_stats:
+        self._update_stats(k_per_token, entropy)

     return topk_indices.to(torch.int32), normalized_values.to(self.output_dtype)
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tensorrt_llm/_torch/modules/fused_moe/routing.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., some_file.py)
Python classes should use PascalCase (e.g., class SomeClass)
Python functions and methods should use snake_case (e.g., def my_awesome_function():)
Python local variables should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL)
Python constants should use upper snake_case (e.g., MY_CONSTANT)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format"""<type>: Description"""
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic
Files:
tensorrt_llm/_torch/modules/fused_moe/routing.py
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification
Files:
tensorrt_llm/_torch/modules/fused_moe/routing.py
🧠 Learnings (1)
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/routing.py
🪛 Ruff (0.14.11)
tensorrt_llm/_torch/modules/fused_moe/routing.py
654-654: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
709-709: Unpacked variable num_tokens is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
709-709: Unpacked variable num_experts is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🔇 Additional comments (1)
tensorrt_llm/_torch/modules/fused_moe/routing.py (1)
791-792: Registry mapping added correctly. The AdaptiveKMoeRoutingMethod is properly registered in ROUTING_METHOD_TYPE_TO_CLASS.
# AdaptiveK: Entropy-based dynamic K selection
AdaptiveK = 7,
# Unspecified
Unspecified = 6,
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, find the Python file
find . -name "routing.py" -path "*/fused_moe/*" | head -5Repository: NVIDIA/TensorRT-LLM
Length of output: 113
🏁 Script executed:
#!/bin/bash
# Read the Python file around lines 140-170 to see the enum and the synchronization comment
sed -n '140,170p' ./tensorrt_llm/_torch/modules/fused_moe/routing.py
Repository: NVIDIA/TensorRT-LLM
Length of output: 1113
🏁 Script executed:
#!/bin/bash
# Now search for the C++ enum definition in runner.h
find . -name "runner.h" -path "*blockScaleMoe*" | head -5Repository: NVIDIA/TensorRT-LLM
Length of output: 129
🏁 Script executed:
#!/bin/bash
# Read the C++ enum definition from runner.h
rg -n "enum class RoutingMethodType" -A 25 ./cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.hRepository: NVIDIA/TensorRT-LLM
Length of output: 1107
🏁 Script executed:
#!/bin/bash
# Search for any mention of AdaptiveK in the C++ codebase
rg -n "AdaptiveK" ./cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/ -A 2 -B 2Repository: NVIDIA/TensorRT-LLM
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Search for AdaptiveK in the entire C++ codebase
rg -n "AdaptiveK" ./cpp/ | head -20Repository: NVIDIA/TensorRT-LLM
Length of output: 45
Enum value ordering is inconsistent and breaks C++ synchronization.
AdaptiveK = 7 is declared before Unspecified = 6, violating numerical order. More critically, the comment at line 146 requires this enum to stay in sync with the C++ RoutingMethodType in runner.h, but the C++ enum does not include AdaptiveK at all. Either remove AdaptiveK from the Python enum or add it to the C++ counterpart with the corresponding value.
Proposed fix
- # AdaptiveK: Entropy-based dynamic K selection
- AdaptiveK = 7,
# Unspecified
Unspecified = 6,
+ # AdaptiveK: Entropy-based dynamic K selection
+ AdaptiveK = 7,
🤖 Prompt for AI Agents
In `@tensorrt_llm/_torch/modules/fused_moe/routing.py` around lines 160 - 163, The
Python enum RoutingMethodType currently defines AdaptiveK = 7 before Unspecified
= 6, breaking numeric order and desyncing with the C++ RoutingMethodType in
runner.h; either remove AdaptiveK from the Python enum or add AdaptiveK with the
same numeric value to the C++ enum in runner.h so both definitions match, and
then reorder the Python entries so numeric values are ascending (ensure
Unspecified = 6 then AdaptiveK = 7 if you keep it) and update the file comment
that references runner.h to reflect the synchronized state.
def __init__(
    self,
    k_min: int = 2,
    k_max: int = 8,
    entropy_thresholds: list = None,
    output_dtype: torch.dtype = torch.float32,
):
    """
    Initialize Adaptive-K routing.

    Args:
        k_min: Minimum experts to use (for confident/low-entropy routing)
        k_max: Maximum experts to use (for uncertain/high-entropy routing)
        entropy_thresholds: List of thresholds for K selection.
            E.g., [1.3, 1.7] with k_values=[k_min, k_mid, k_max] means:
                H < 1.3        -> K=k_min
                1.3 <= H < 1.7 -> K=k_mid
                H >= 1.7       -> K=k_max
        output_dtype: Output dtype for routing weights
    """
    super().__init__(k_max, output_dtype)
    self.k_min = k_min
    self.k_max = k_max
    self.k_values = [k_min, (k_min + k_max) // 2, k_max]
    self.entropy_thresholds = entropy_thresholds or [1.3, 1.7]
    self.output_dtype = output_dtype
Constructor calls super().__init__ with incorrect arguments.
BaseMoeRoutingMethod.__init__ takes no arguments, but line 670 passes (k_max, output_dtype). This will cause a TypeError at instantiation.
Proposed fix
- super().__init__(k_max, output_dtype)
+ super().__init__()
self.k_min = k_min
self.k_max = k_max
+ self.top_k = k_max  # Required for BaseMoeRoutingMethod.get_experts_per_token()
🧰 Tools
🪛 Ruff (0.14.11)
654-654: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
🤖 Prompt for AI Agents
In `@tensorrt_llm/_torch/modules/fused_moe/routing.py` around lines 650 - 675, The
constructor for the adaptive routing class incorrectly calls super().__init__
with arguments; BaseMoeRoutingMethod.__init__ accepts no parameters so remove
the arguments and call super().__init__() (or drop the call if not needed),
leaving the rest of the initialization intact (keep setting self.k_min,
self.k_max, self.k_values, self.entropy_thresholds, self.output_dtype) so
instantiation no longer raises a TypeError.
    self,
    k_min: int = 2,
    k_max: int = 8,
    entropy_thresholds: list = None,
Use explicit Optional type hint instead of implicit None default.
The entropy_thresholds parameter has a mutable default (list) with None as the actual default value. The type hint list is incorrect—it should be Optional[list] or list | None.
Proposed fix
+from typing import Callable, Dict, List, Optional, Type
...
def __init__(
self,
k_min: int = 2,
k_max: int = 8,
- entropy_thresholds: list = None,
+ entropy_thresholds: Optional[List[float]] = None,
output_dtype: torch.dtype = torch.float32,
):📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-    entropy_thresholds: list = None,
+from typing import Callable, Dict, List, Optional, Type
+
+def __init__(
+    self,
+    k_min: int = 2,
+    k_max: int = 8,
+    entropy_thresholds: Optional[List[float]] = None,
+    output_dtype: torch.dtype = torch.float32,
+):
🧰 Tools
🪛 Ruff (0.14.11)
654-654: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
🤖 Prompt for AI Agents
In `@tensorrt_llm/_torch/modules/fused_moe/routing.py` at line 654, The parameter
annotation for entropy_thresholds uses plain "list" with a None default; change
it to an explicit nullable type (e.g., Optional[list] or "list | None") and keep
the default as None; update the function/method signature where
entropy_thresholds appears (routing.py, the function taking entropy_thresholds)
and add the corresponding import (from typing import Optional) if using
Optional[list]. Ensure you do not use a mutable default and only change the type
hint to explicitly allow None.
def _update_stats(self, k_per_token: torch.Tensor, entropy: torch.Tensor):
    """Update internal statistics for monitoring compute savings."""
    for k in self.k_values:
        self._k_counts[k] += (k_per_token == k).sum().item()
    self._total_tokens += k_per_token.numel()
    self._entropy_sum += entropy.sum().item()
Statistics tracking is not thread-safe and may cause race conditions.
_update_stats modifies _k_counts, _total_tokens, and _entropy_sum without synchronization. In multi-threaded inference scenarios, concurrent calls to apply() will corrupt these counters.
Consider either:
- Using threading.Lock to protect updates
- Using atomic operations
- Documenting that stats are approximate in concurrent scenarios
- Making stats collection opt-in via a flag
Option 1: Add thread-safe locking
+import threading
...
def __init__(
self,
k_min: int = 2,
k_max: int = 8,
entropy_thresholds: Optional[List[float]] = None,
output_dtype: torch.dtype = torch.float32,
):
...
# Statistics tracking
self._k_counts = {k: 0 for k in self.k_values}
self._total_tokens = 0
self._entropy_sum = 0.0
+ self._stats_lock = threading.Lock()
def _update_stats(self, k_per_token: torch.Tensor, entropy: torch.Tensor):
"""Update internal statistics for monitoring compute savings."""
+ with self._stats_lock:
- for k in self.k_values:
- self._k_counts[k] += (k_per_token == k).sum().item()
- self._total_tokens += k_per_token.numel()
- self._entropy_sum += entropy.sum().item()
+ for k in self.k_values:
+ self._k_counts[k] += (k_per_token == k).sum().item()
+ self._total_tokens += k_per_token.numel()
+ self._entropy_sum += entropy.sum().item()
🤖 Prompt for AI Agents
In `@tensorrt_llm/_torch/modules/fused_moe/routing.py` around lines 740 - 745, The
stats update in _update_stats is not synchronized and can race when apply() runs
concurrently; fix by adding a threading.Lock instance (e.g., self._stats_lock)
initialized in the module/class constructor and guarding the modifications to
_k_counts, _total_tokens, and _entropy_sum with the lock (use a with
self._stats_lock: block around the existing body of _update_stats);
alternatively, make stats collection opt-in by adding a boolean flag (e.g.,
self._collect_stats) checked in apply() and _update_stats so callers can disable
stats in multithreaded contexts.
@property
def routing_method_type(self) -> RoutingMethodType:
    return RoutingMethodType.AdaptiveK
Missing top_k property required by BaseMoeRoutingMethod.get_experts_per_token().
BaseMoeRoutingMethod.get_experts_per_token() returns self.top_k, but AdaptiveKMoeRoutingMethod does not define top_k. This will raise AttributeError when experts_per_token is accessed.
Proposed fix
Either add self.top_k = k_max in __init__, or override get_experts_per_token:
+ def get_experts_per_token(self) -> int:
+ """Returns k_max since output shape is fixed to k_max experts per token."""
+ return self.k_max
+
@property
def routing_method_type(self) -> RoutingMethodType:
return RoutingMethodType.AdaptiveK🤖 Prompt for AI Agents
In `@tensorrt_llm/_torch/modules/fused_moe/routing.py` around lines 770 - 772,
AdaptiveKMoeRoutingMethod lacks the top_k attribute required by
BaseMoeRoutingMethod.get_experts_per_token(), causing an AttributeError when
experts_per_token is accessed; fix by initializing top_k in
AdaptiveKMoeRoutingMethod.__init__ (e.g., set self.top_k = k_max or appropriate
k value) or override get_experts_per_token in AdaptiveKMoeRoutingMethod to
return the correct experts-per-token logic, and ensure references to
RoutingMethodType.AdaptiveK and routing_method_type remain unchanged.
- Fix enum ordering (Unspecified=6 before AdaptiveK=7)
- Fix super().__init__() call without args
- Add self.top_k for BaseMoeRoutingMethod compatibility
- Use Optional[List[float]] type hint
- Add thread-safe stats with threading.Lock
- Make stats collection configurable via _collect_stats flag
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
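As an illustration (not an officially documented example), a typical invocation combining the flags above might look like /bot run --disable-fail-fast --stage-list "A10-PyTorch-1", which launches the pipeline without fail-fast and runs only the named test stage; the stage name here is just an example.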
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.