I'm considering deprecating the sparse SNode implementation on Metal, or more broadly on the non-CUDA backends. My reasoning:
Product side: For sparse SNodes to be beneficial, the simulation scale has to be very large, often too large to run in real time. That in turn requires a very solid general-purpose GPU compute infrastructure, and so far CUDA's ecosystem is probably the most mature in this area. Users who need to run such large-scale simulations usually resort to CUDA, CPU vectorization, or HPC clusters. Graphics APIs like Vulkan or OpenGL are hardly an option here.
Product side: Our users tend to use the graphics-API backends for real-time, rendering-related tasks.
Technical side: General-purpose compute on graphics APIs is still not on par with CUDA. Dropping sparse SNode support on these backends would significantly simplify our backend implementation, e.g. it would make unifying all codegen onto SPIR-V a lot more achievable.
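For context, here is a minimal sketch of the kind of program this affects: a field placed under a pointer SNode, run on a graphics-API backend such as Metal. The API shown is illustrative and may differ slightly between Taichi versions; under this proposal, such a program would need arch=ti.cuda or a CPU arch instead.

```python
import taichi as ti

# Sketch only: sparse SNodes on ti.metal are exactly what would be deprecated.
ti.init(arch=ti.metal)

x = ti.field(dtype=ti.f32)
# Pointer SNode: a 16x16 grid of blocks, each holding an 8x8 dense chunk.
# Blocks are only materialized (activated) when first touched.
block = ti.root.pointer(ti.ij, 16)
block.dense(ti.ij, 8).place(x)

@ti.kernel
def touch():
    x[2, 3] = 1.0  # writing activates the enclosing 8x8 block

@ti.kernel
def count_active() -> ti.i32:
    n = 0
    for i, j in x:  # struct-for visits only active elements
        n += 1
    return n

touch()
print(count_active())  # 64: one 8x8 dense block was activated
```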