Newcomer questions about the project #397
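Both styles are supported: a plain argument is a runtime value supplied at launch, while an argument marked `#[comptime]` is baked into the kernel when it is compiled.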
```rust
#[cube(launch)]
fn arg_runtime(arg: u32) {
    // `arg` is an ordinary runtime value, supplied at launch time.
}

#[cube(launch)]
fn arg_comptime(#[comptime] arg: u32) {
    // `arg` is resolved during kernel expansion; each distinct value
    // produces a distinct compiled kernel.
}
```
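For illustration, a hypothetical call site; the exact generated `launch` signature and the argument wrapper types (such as `ScalarArg`) vary across CubeCL versions, so treat this as a sketch rather than the definitive API:

```rust
// Hypothetical usage; `client`, `cube_count`, and `cube_dim` are assumed to
// be set up as in the CubeCL examples, with `R` some concrete runtime.
// The runtime value travels with the launch:
arg_runtime::launch::<R>(&client, cube_count, cube_dim, ScalarArg::new(42));
// The comptime argument is a plain Rust value, fixed when the kernel expands:
arg_comptime::launch::<R>(&client, cube_count, cube_dim, 42);
```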
Yes, it's planned, but not yet a priority; feel free to submit a PR if you want to work on this.
Thank you for your reply.
@lmtss The device doesn't work like in wgpu; it doesn't contain any state. It's simply an identifier for where you want to execute your kernels. The client is where you actually call functions, but the client has a type dependency on the runtime. We could extract the client methods into a trait and pass around something like …
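To illustrate the idea, a hypothetical sketch (none of these names exist in CubeCL as-is) of gathering the client methods behind an object-safe trait so callers can hold a type-erased handle instead of naming the concrete runtime:

```rust
use std::sync::Arc;

/// Stand-in for a runtime-specific resource handle (hypothetical).
pub struct Handle(pub u64);

/// Hypothetical object-safe trait capturing the client surface without
/// the concrete `R: Runtime` type parameter.
pub trait DynClient {
    /// Upload data to the device and return a handle to the allocation.
    fn create(&self, data: &[u8]) -> Handle;
    /// Read an allocation back to the host.
    fn read(&self, handle: &Handle) -> Vec<u8>;
}

/// Callers could then pass around a type-erased client.
pub type AnyClient = Arc<dyn DynClient>;
```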
Hi! I think this project is amazing and really cool. I've made an effort to go through the code and examples, but there are still some parts I don't fully understand.
How do I pass constant variables at runtime instead of at compile time?

For example, in Vulkan I would use push constants. However, the CubeCL wgpu backend doesn't seem to use push constants (it looks like it defaults to passing empty data). But in the `reduce_kernel` example, scalar parameters can be passed to the kernel. How is this achieved in the backend?
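For context, the common alternative to push constants is to pack the scalar arguments into a small buffer bound alongside the tensor buffers. A minimal sketch in plain wgpu (illustrative only, not CubeCL's actual backend code; `scalar_args_buffer` is a hypothetical helper, and whether the binding is a uniform or storage buffer is a backend detail):

```rust
use wgpu::util::DeviceExt;

/// Hypothetical helper (not CubeCL code): upload scalar kernel arguments
/// as a small buffer that gets bound next to the tensor buffers.
fn scalar_args_buffer(device: &wgpu::Device, scalars: &[u32]) -> wgpu::Buffer {
    device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: Some("scalar-args"),
        contents: bytemuck::cast_slice(scalars),
        usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
    })
}
```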
How do I save the compilation result of a kernel?

I think CubeCL's compilation approach is really cool, but runtime compilation might cause stuttering. Is it possible to cache compilation results to a file, similar to how PSO (Pipeline State Object) caching is commonly done in rendering?
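A minimal sketch of the file-cache pattern the question describes, using only the standard library. The `compile` closure is a hypothetical stand-in for whatever produces the backend's binary artifact (SPIR-V, PTX, a pipeline cache blob, ...); CubeCL does not necessarily expose such a hook today:

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::PathBuf;

/// Derive a cache file name from the kernel source (or any stable key).
fn cache_path(kernel_source: &str) -> PathBuf {
    let mut hasher = DefaultHasher::new();
    kernel_source.hash(&mut hasher);
    PathBuf::from(format!("kernel_cache/{:016x}.bin", hasher.finish()))
}

/// Return the cached artifact if present; otherwise compile and store it.
fn load_or_compile(source: &str, compile: impl FnOnce(&str) -> Vec<u8>) -> Vec<u8> {
    let path = cache_path(source);
    if let Ok(bytes) = fs::read(&path) {
        return bytes; // cache hit: skip compilation entirely
    }
    let bytes = compile(source);
    let _ = fs::create_dir_all("kernel_cache");
    let _ = fs::write(&path, &bytes);
    bytes
}
```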