
Conversation

@dhansen-nvidia (Collaborator) commented Jan 14, 2026

Summary by CodeRabbit

  • Documentation
    • Added a comprehensive CPU affinity configuration guide to the deployment documentation, covering NUMA-aware worker affinity behavior, environment variable configuration options, and step-by-step guidance for preventing OpenMPI and Slurm from overriding TensorRT-LLM's CPU affinity settings, to ensure optimal multi-node performance.

✏️ Tip: You can customize this high-level summary in your review settings.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
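
As an illustration only (not part of the help text above), a request that runs a single test stage and keeps going past failures could be posted as a PR comment like this; the stage name is taken from the `--stage-list` example above and must be replaced with a stage that actually exists for this pipeline:

```bash
# Posted as a comment on the pull request. The stage name is a placeholder
# from the --stage-list example above; substitute a real stage for this PR.
/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
```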

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.
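
For instance, a documentation-only change might be skipped with a comment like the one below; the reason text is illustrative:

```bash
# --comment is required; the reason shown here is only an example.
/bot skip --comment "Docs-only change, no test-relevant code paths touched"
```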

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@dhansen-nvidia dhansen-nvidia requested a review from a team as a code owner January 14, 2026 20:44
@coderabbitai coderabbitai bot (Contributor) commented Jan 14, 2026

📝 Walkthrough

Walkthrough

A new documentation file explains CPU affinity configuration in TensorRT-LLM, including NUMA-aware behavior, environment variables, and guidance for preventing OpenMPI and Slurm from overriding CPU affinity settings. The documentation index is updated accordingly.
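
For orientation, a minimal multi-node launch environment combining the pieces named in this walkthrough might look like the sketch below. This is an assumption-laden illustration, not an excerpt from the new guide: the value assigned to `TLLM_NUMA_AWARE_WORKER_AFFINITY` and the `trtllm-serve` launch line are placeholders, while the OpenMPI and Slurm settings are the ones quoted in the review comments further down.

```bash
# Sketch only: combines the settings discussed in this PR; values are illustrative.

# Ask TensorRT-LLM to manage NUMA-aware worker affinity itself.
# (Variable name from the walkthrough; accepted values are documented in the new guide.)
export TLLM_NUMA_AWARE_WORKER_AFFINITY=1

# Keep OpenMPI from binding ranks on its own, so TensorRT-LLM's affinity settings survive.
export OMPI_MCA_hwloc_base_binding_policy=none
export OMPI_MCA_rmaps_base_inherit=1

# Keep Slurm from binding tasks when launching through srun.
export SLURM_CPU_BIND=none
srun --cpu-bind=none trtllm-serve <model> ...   # launch line is a placeholder
```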

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation (CPU Affinity)**<br>`docs/source/deployment-guide/configuring-cpu-affinity.md`, `docs/source/index.rst` | New documentation file (+80 lines) detailing CPU affinity configuration, NUMA-aware worker affinity behavior controlled via the `TLLM_NUMA_AWARE_WORKER_AFFINITY` environment variable, and best practices for preventing OpenMPI and Slurm from overriding CPU affinity. Index entry added to the deployment guide toctree. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | The PR description contains only the template with no actual content filled in; no explanation of the issue, solution, or test coverage is provided. | Fill in the Description section explaining what CPU affinity configuration is documented and why, and add relevant test coverage information or indicate if none applies. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately describes the main change: adding documentation on CPU affinity configuration in TensorRT-LLM. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot (Contributor) left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@docs/source/deployment-guide/configuring-cpu-affinity.md`:
- Around line 75-77: Add a language specifier to the fenced code block
containing the environment export so syntax highlighting is applied; update the
block delimiter from ``` to ```bash for the line "export SLURM_CPU_BIND=none"
(i.e., change the code fence around that snippet to include "bash").
- Around line 67-69: The code block containing "srun --cpu-bind=none ..." is
missing a language specifier; update the fenced code block to include a language
(e.g., use ```bash) so the snippet renders with proper syntax highlighting and
readability in the docs.
- Around line 47-50: The code block containing the two environment variable
exports (OMPI_MCA_hwloc_base_binding_policy and OMPI_MCA_rmaps_base_inherit) is
missing a language specifier; update the fenced code block to include a language
(e.g., bash) so syntax highlighting works correctly, i.e., change the opening
fence to include "bash" for the block that contains the two export lines.
🧹 Nitpick comments (1)
docs/source/deployment-guide/configuring-cpu-affinity.md (1)

24-28: Consider clarifying the behavior description.

The description for the <unset> case states that affinity is "cleared if it is constrained." The term "cleared" might be ambiguous—consider using "left unchanged" or "preserved" to clarify that TensorRT-LLM does not modify user-configured affinity in this case.

📝 Suggested clarification
-| <unset>                         | Affinity is auto-configured if it is unconstrained, and cleared if it is constrained by the user and/or environment          |
+| <unset>                         | Affinity is auto-configured if it is unconstrained, and left unchanged if it is constrained by the user and/or environment |
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c99faae and 88b6476.

📒 Files selected for processing (2)
  • docs/source/deployment-guide/configuring-cpu-affinity.md
  • docs/source/index.rst
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{md,rst}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

When documenting CLI commands for TensorRT-LLM tools like trtllm-serve, trtllm-bench, or trtllm-eval, prefer using --config over --extra_llm_api_options for specifying configuration files

Files:

  • docs/source/deployment-guide/configuring-cpu-affinity.md
  • docs/source/index.rst
🧠 Learnings (2)
📓 Common learnings
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • docs/source/deployment-guide/configuring-cpu-affinity.md
🪛 LanguageTool
docs/source/deployment-guide/configuring-cpu-affinity.md

[style] ~9-~9: ‘So as to’ expresses purpose and is used in formal texts. Consider using “to”.
Context: ...s part of a particular TRT-LLM instance so as to minimize latency and maximize bandwidth...

(SO_AS_TO)


[style] ~32-~32: ‘Whether or not’ might be wordy. Consider a shorter alternative.
Context: ... ## Other environmental considerations Whether or not the user chooses to manually configure ...

(EN_WORDINESS_PREMIUM_WHETHER_OR_NOT)


[style] ~33-~33: Try moving the adverb to make the sentence clearer.
Context: ...considerations Whether or not the user chooses to manually configure CPU affinity or have TensorRT-LLM configure it autom...

(SPLIT_INFINITIVE)


[grammar] ~63-~63: Ensure spelling is correct
Context: ...s should be launched accordingly: #### srun The srun parameters should include `--cp...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🪛 markdownlint-cli2 (0.18.1)
docs/source/deployment-guide/configuring-cpu-affinity.md

47-47: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


67-67: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


75-75: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
docs/source/index.rst (1)

28-28: LGTM!

The new CPU affinity documentation page is correctly added to the Deployment Guide toctree with proper indentation and formatting.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

Comment on lines +47 to +50
```
export OMPI_MCA_hwloc_base_binding_policy=none
export OMPI_MCA_rmaps_base_inherit=1
```

🛠️ Refactor suggestion | 🟠 Major

Add language specifier to code block.

The code block should specify a language for proper syntax highlighting and improved readability.

♻️ Proposed fix

````diff
-```
+```bash
 export OMPI_MCA_hwloc_base_binding_policy=none
 export OMPI_MCA_rmaps_base_inherit=1
````


Comment on lines +67 to +69
```
srun --cpu-bind=none ...
```

🛠️ Refactor suggestion | 🟠 Major

Add language specifier to code block.

The code block should specify a language for proper syntax highlighting and improved readability.

♻️ Proposed fix

````diff
-```
+```bash
 srun --cpu-bind=none ...
````


Comment on lines +75 to +77
```
export SLURM_CPU_BIND=none
```

🛠️ Refactor suggestion | 🟠 Major

Add language specifier to code block.

The code block should specify a language for proper syntax highlighting and improved readability.

♻️ Proposed fix

````diff
-```
+```bash
 export SLURM_CPU_BIND=none
````
