[triton-raise-block-ptr]: Increase test coverage #3315

Open
etiotto wants to merge 4 commits into main
Conversation

etiotto
Contributor

@etiotto etiotto commented Jan 30, 2025

No description provided.

@etiotto etiotto self-assigned this Jan 30, 2025
Signed-off-by: Tiotto, Ettore <[email protected]>
@etiotto etiotto marked this pull request as ready for review January 30, 2025 20:31
%7 = tt.load %6, %5, %cst : tensor<4x!tt.ptr<f32>>
// TODO: replace with the following line when masked loads are supported.
// %7 = tt.load %6, %5, %cst : tensor<4x!tt.ptr<f32>>
%7 = tt.load %6 : tensor<4x!tt.ptr<f32>>
Contributor

I'm not sure I understand the point of removing the mask here.
As I understand it, the purpose of this test is to exercise the pass's ability to handle loads/stores with masks. If we remove the mask because masks are not handled yet, I'm not sure it makes sense to keep the test.

Contributor Author

@etiotto etiotto Jan 31, 2025

So we have two choices: XFAIL the test, or keep it as it is now and change it later when/if masked loads are supported. I am open to either; which do you prefer?
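For context, a minimal Triton kernel (hypothetical names, not taken from this PR's test) that produces the masked load form the TODO above refers to is sketched below; the mask and other arguments are what lower to the extra operands on tt.load:

import triton
import triton.language as tl

@triton.jit
def masked_copy(in_ptr, out_ptr, n, BLOCK: tl.constexpr):
    # Per-lane offsets and a bounds mask for the tail of the buffer.
    offs = tl.arange(0, BLOCK)
    mask = offs < n
    # mask/other lower to the commented-out masked form:
    #   %7 = tt.load %6, %5, %cst : tensor<4x!tt.ptr<f32>>
    x = tl.load(in_ptr + offs, mask=mask, other=0.0)
    tl.store(out_ptr + offs, x, mask=mask)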

// CHECK-DAG: [[VAR_1_:%.+]] = tt.make_range {end = 1280 : i32, start = 1024 : i32} : tensor<256xi32>
// CHECK: [[VAR_2_:%.+]] = tt.addptr [[VAR_0_]], [[VAR_1_]] : tensor<256x!tt.ptr<bf16>>, tensor<256xi32>
// CHECK: [[VAR_3_:%.+]] = scf.for [[VAR_arg1_:%.+]] = {{.*}} iter_args([[VAR_arg2_:%.+]] = [[VAR_2_]]) -> (tensor<256x!tt.ptr<bf16>>) {
// CHECK-NOT: tt.make_tensor_ptr
Contributor

Does that mean we do not plan to support the case where there is an expandDimOp in the loop any time soon?

Contributor Author

Yes, correct. I'd like to see how often this pattern arises in practice. If it doesn't happen often, we can keep this as a permanent limitation.
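For reference, a hypothetical kernel shape that would hit this limitation (names and shapes are illustrative, not from the test) is one where the 1-D offsets are expanded to 2-D inside the loop body, so the pointer tensor is rebuilt every iteration rather than being carried through iter_args and advanced:

import triton
import triton.language as tl

@triton.jit
def row_sum(in_ptr, out_ptr, n_rows, BLOCK: tl.constexpr):
    offs = tl.arange(0, BLOCK)
    acc = tl.zeros([1, BLOCK], dtype=tl.float32)
    for row in range(0, n_rows):
        # tt.expand_dims is emitted inside the loop: the pointer tensor is
        # recomputed each iteration instead of being advanced, which is the
        # pattern the pass currently does not raise to block pointers.
        ptrs = in_ptr + tl.expand_dims(row * BLOCK + offs, 0)
        acc += tl.load(ptrs)
    tl.store(out_ptr + tl.expand_dims(offs, 0), acc)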
