[triton-raise-block-ptr]: Increase test coverage #3315
base: main
Conversation
Signed-off-by: Tiotto, Ettore <[email protected]>
%7 = tt.load %6, %5, %cst : tensor<4x!tt.ptr<f32>>
// TODO: replace with the following line when masked loads are supported.
// %7 = tt.load %6, %5, %cst : tensor<4x!tt.ptr<f32>>
%7 = tt.load %6 : tensor<4x!tt.ptr<f32>>
I'm not sure I understand the point of removing the mask here.
As I understand it, the purpose of the test is to exercise the pass's ability to handle loads/stores with masks. So if we remove the mask because masks are not handled yet, I'm not sure it makes sense to keep the test.
So we have two choices: XFAIL the test, or keep it as it is now and change it later when/if we add masked load support. I am open to either; which do you prefer?
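For reference, the XFAIL route would look roughly like this in the lit test header (a sketch only; the RUN line and pass flag are assumptions based on the pass name in the PR title, not copied from the repository):

// RUN: triton-opt %s -triton-raise-block-pointer | FileCheck %s
// XFAIL: *
// The masked form stays in the test; lit records the expected failure until
// masked loads are supported and the XFAIL line can be dropped.
%7 = tt.load %6, %5, %cst : tensor<4x!tt.ptr<f32>>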
// CHECK-DAG: [[VAR_1_:%.+]] = tt.make_range {end = 1280 : i32, start = 1024 : i32} : tensor<256xi32>
// CHECK: [[VAR_2_:%.+]] = tt.addptr [[VAR_0_]], [[VAR_1_]] : tensor<256x!tt.ptr<bf16>>, tensor<256xi32>
// CHECK: [[VAR_3_:%.+]] = scf.for [[VAR_arg1_:%.+]] = {{.*}} iter_args([[VAR_arg2_:%.+]] = [[VAR_2_]]) -> (tensor<256x!tt.ptr<bf16>>) {
// CHECK-NOT: tt.make_tensor_ptr
Does that mean we do not plan to support the case where there is an expandDimOp in the loop any time soon?
Yes, correct. I'd like to see how often this pattern arises in practice. If it doesn't happen often, we can keep this as a permanent limitation.
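To make the limitation concrete, here is a hypothetical minimal reproducer (not taken from the test; function and value names are made up, and the tt.splat/tt.expand_dims syntax is assumed from recent Triton IR) of the kind of loop the pass currently skips: a tt.expand_dims inside the scf.for body feeding the pointer update, so the loop keeps iterating over a tensor of pointers and no tt.make_tensor_ptr is emitted.

tt.func @expand_dims_in_loop(%arg0: !tt.ptr<bf16>) {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %c16 = arith.constant 16 : index
  %offs = tt.make_range {end = 256 : i32, start = 0 : i32} : tensor<256xi32>
  %base = tt.splat %arg0 : !tt.ptr<bf16> -> tensor<1x256x!tt.ptr<bf16>>
  // The loop-carried value stays a tensor of pointers; the expand_dims in the
  // body is the pattern the pass does not raise to tt.make_tensor_ptr.
  %res = scf.for %i = %c0 to %c16 step %c1 iter_args(%ptrs = %base) -> (tensor<1x256x!tt.ptr<bf16>>) {
    %offs2d = tt.expand_dims %offs {axis = 0 : i32} : tensor<256xi32> -> tensor<1x256xi32>
    %next = tt.addptr %ptrs, %offs2d : tensor<1x256x!tt.ptr<bf16>>, tensor<1x256xi32>
    %v = tt.load %next : tensor<1x256x!tt.ptr<bf16>>
    scf.yield %next : tensor<1x256x!tt.ptr<bf16>>
  }
  tt.return
}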