Package management and type piracy: should this be a part of CUDA.jl? #31
Yes, a separate JLL makes sense. But the question is whether the fundamental type piracy can be fixed. It's fundamental because users who construct a CuSparseMatrixCSR are missing a core operation, and there is no error message or warning from CUDA.jl pointing them to the package that fills in this part of the linear algebra interface, so they have no way of knowing what's going on. Since CUDA.jl defines the rest of the linear algebra interface, leaving off this one function causes downstream usage issues. It's a red flag that clearly should get resolved somehow. Maybe for now it's fine as a separate package while CUDSS is still being developed, but why wouldn't it be part of CUDA.jl's sparse array dispatches by default in the long run? Or if not, what's the plan to fix the type piracy?
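For readers who haven't run into the term: type piracy means defining a method of a function you don't own for types you don't own. Below is a minimal, generic sketch of that pattern, with made-up module names standing in for CUDA.jl and the add-on solver package; it is not the actual CUDSS.jl code.

```julia
using LinearAlgebra

module SomeArrayPackage  # stands in for CUDA.jl, which owns the array type
export SomeMatrix
struct SomeMatrix{T} <: AbstractMatrix{T}
    data::Matrix{T}
end
Base.size(A::SomeMatrix) = size(A.data)
Base.getindex(A::SomeMatrix, i::Int, j::Int) = A.data[i, j]
end

module AddOnPackage  # stands in for the separate solver package
using LinearAlgebra, ..SomeArrayPackage
# Type piracy: AddOnPackage owns neither `LinearAlgebra.lu` nor `SomeMatrix`,
# yet it is the package that supplies this method.
LinearAlgebra.lu(A::SomeMatrix) = lu(A.data)
end

using .SomeArrayPackage
A = SomeMatrix([4.0 1.0; 1.0 3.0])
lu(A)  # only works because AddOnPackage happens to be loaded
```

The user-facing symptom is the same as in the CUDA.jl case: whether `lu` works on the array type depends on whether some other package is loaded, and nothing in the owning package tells the user that.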
I was talking about LinearSolve's CUDA.jl extension. In the end, it depends on what NVIDIA plans to do with cuDSS. If we don't have "official" sparse factorizations in CUDA, there isn't much we can do in the linear algebra interface of CUDA.jl.
CUDA.jl is already way too heavyweight; I don't think adding more libraries is a good way forward. In fact, we've been doing the inverse, moving stuff like cuDNN and cuTENSOR out and restricting CUDA.jl to only what the CUDA toolkit provides.
What about a separate CUDASparseArrays package that is complete with the sparse array definitions and solvers? That would make it like SparseArrays, which has SuiteSparse to match it.
Extensions cannot have dependencies in Julia v1.10. And if it were added as a requirement of the extension, that would break a lot of existing code.
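For context, in Julia v1.10 a package extension is declared through the `[weakdeps]` and `[extensions]` sections of Project.toml, and it may only depend on its parent package plus the weak dependencies that trigger it. A rough sketch of what the relevant part of LinearSolve's Project.toml looks like (names abbreviated and illustrative, not copied from the real file):

```toml
[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"

[extensions]
# The extension loads whenever CUDA is present alongside LinearSolve.
LinearSolveCUDAExt = "CUDA"
```

Making CUDSS part of the trigger, e.g. `LinearSolveCUDAExt = ["CUDA", "CUDSS"]`, would mean the extension no longer loads for users who only do `using CUDA`, which seems to be the breakage referred to above.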
There are a few issues with the package given its separation from CUDA.jl: `lu` doesn't "just work" on CUDA.jl's sparse matrices, and this package fixes it to work more like a regular sparse array. I would think users would be better served if there weren't a special package to add on the side that fills in missing linear algebra functions. So, since this is just the sparse factorization of a sparse matrix type defined in CUDA.jl, it's very odd that this one function is not in CUDA.jl.
CC @maleadt
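To make the user-facing gap concrete, here is a rough sketch of the workflow being described, assuming CUDA.jl, CUDSS.jl, and a CUDA-capable GPU are available; exact error messages and the supported element/index types may differ between versions.

```julia
using LinearAlgebra, SparseArrays
using CUDA, CUDA.CUSPARSE

A  = sprand(100, 100, 0.05) + 10I   # CPU sparse matrix (SparseMatrixCSC)
dA = CuSparseMatrixCSR(A)           # its GPU counterpart

lu(A)    # works out of the box via SparseArrays/SuiteSparse
# lu(dA) # errors with only CUDA.jl loaded: no working sparse `lu` for this type

using CUDSS  # the separate package supplies the missing method (the piracy)
F = lu(dA)   # now dispatches to a cuDSS-backed sparse factorization
```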