Some simplification patterns can create invalid IR due to type changes #2634
christopherbate added a commit to NVIDIA/TensorRT-Incubator that referenced this issue on Dec 4, 2024:
Added a patch that was missing from the last StableHLO upgrade. This patch addresses some issues mentioned in openxla/stablehlo#2634. An additional test is added to mlir-tensorrt as a regression test.
christopherbate added a commit to NVIDIA/TensorRT-Incubator that referenced this issue on Dec 5, 2024:
Added a patch that was missing from the last StableHLO upgrade. This patch addresses some issues mentioned in openxla/stablehlo#2634. An additional test is added to mlir-tensorrt as a regression test. Also makes some minor cleanup to CI/CD config in order to fix the caching mechanism, using the PR as a test case.
What happened?
There are several places in the simplification transforms (e.g. https://github.com/openxla/stablehlo/blob/main/stablehlo/transforms/StablehloAggressiveFolder.cpp#L744) where a rewrite pattern unconditionally replaces a Value with a Value of a potentially different type. Sometimes this is OK, sometimes not.
In some IR, folding the stablehlo.subtract and replacing it directly with a constant of type tensor<1x1xi32> is OK. However, if the user of %0 is a return or something like stablehlo.composite, the result is invalid IR. It seems hit-or-miss whether patterns check for, and are tested against, edge cases like this.
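To make the failure mode concrete, here is a hypothetical sketch (this IR is illustrative, not the original reproducer, and the exact assembly syntax may differ slightly from what stablehlo accepts):

```mlir
// Illustrative only: the subtract's declared result type is
// tensor<?x?xi32>, while its folded value has type tensor<1x1xi32>.
func.func @repro(%arg0: tensor<1x1xi32>) -> tensor<?x?xi32> {
  %c = stablehlo.constant dense<1> : tensor<1x1xi32>
  %0 = stablehlo.subtract %arg0, %c
      : (tensor<1x1xi32>, tensor<1x1xi32>) -> tensor<?x?xi32>
  // If a fold replaces %0 with a constant of type tensor<1x1xi32>, this
  // return's operand no longer matches the function's declared result
  // type tensor<?x?xi32>, and the module fails verification.
  return %0 : tensor<?x?xi32>
}
```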
Rather than submitting PRs piecemeal each time we encounter a failing pattern, I'm wondering if there's something we can do across the board to address this issue in all the simplification passes.
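One across-the-board option might be to funnel every replacement in these passes through a shared helper that checks types before rewriting. The following is a rough, uncompiled sketch against MLIR's pattern-rewriting API; the helper name is hypothetical:

```cpp
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Hypothetical helper: refuse replacements that would change the
// observed type of the replaced value, so type-sensitive users
// (func.return, stablehlo.composite, ...) stay valid.
static LogicalResult replaceOpCheckingTypes(PatternRewriter &rewriter,
                                            Operation *op,
                                            Value replacement) {
  Type oldType = op->getResult(0).getType();
  if (oldType != replacement.getType()) {
    // Either fail so the pattern simply does not apply, or
    // alternatively materialize a shape-refinement cast here
    // before replacing.
    return failure();
  }
  rewriter.replaceOp(op, replacement);
  return success();
}
```

Patterns would then call this helper instead of rewriter.replaceOp directly, making the type check impossible to forget rather than relying on each pattern to remember it.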