
Add thrust and cub repos #38

Open · wants to merge 3 commits into base: main
Conversation

@jsharpe (Member) commented Nov 3, 2022

As thrust can also be used via an OpenMP backend, make it available as a repository outside of the toolchain.

@jsharpe (Member, Author) commented Nov 3, 2022

I may have missed some includes in thrust; the set here is just what I've used on another project, so it may be missing headers needed for some functionality.
I've also left the rules_license rules commented out here; rules_license is probably not quite stable enough yet to take a dependency on, but when it is we'll want to add those back in.

@cloudhan (Collaborator) commented Nov 4, 2022

The problem here is that the library is frozen at a single version and cannot easily be upgraded or downgraded by the user. It might be better to stick with the version bundled with the CUDA toolkit. There are so many libraries and versions out there that this should belong in a separate repo that provides recipes for CUDA-related libraries, say cuDNN, TensorRT, etc.

@jsharpe (Member, Author) commented Nov 4, 2022

I don't necessarily think it should be a separate repo, but it could be a separate workspace contained within this repo. Ultimately it'll become a bzlmod registry for all these libraries.

We can add version selection to this quite easily as an argument to the dependencies macro. The motivation for adding this is two-fold: to allow use of the downloadable tarballs for the CUDA toolchain, and to allow use of thrust / cub in the case that the user has disabled the CUDA toolchain. It's perfectly valid to use thrust with an OpenMP backend, so this allows the user to do that without needing a CUDA install to actually be present.
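A minimal sketch of what such a version argument could look like, assuming a lookup table of known releases (the macro name `rules_cuda_thrust_dependencies` and the `THRUST_VERSIONS` dict are illustrative, not part of this PR):

```starlark
# Hypothetical sketch: map known thrust versions to their archive checksums,
# and let callers pick one via a macro argument.
THRUST_VERSIONS = {
    "1.17.2": "d021e37f5aac30fd1b9737865399feb57db8e601ae2fc0af3cd41784435e9523",
}

def rules_cuda_thrust_dependencies(version = "1.17.2"):
    maybe(
        name = "thrust",
        repo_rule = http_archive,
        build_file = "@rules_cuda//third_party:thrust.BUILD",
        sha256 = THRUST_VERSIONS[version],
        strip_prefix = "thrust-" + version,
        urls = ["https://github.com/NVIDIA/thrust/archive/refs/tags/%s.tar.gz" % version],
    )
```

Callers pinned to a different release would pass `version = "..."` once the table carries additional entries.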

Comment on lines +213 to +230
maybe(
name = "thrust",
repo_rule = http_archive,
build_file = "@rules_cuda//third_party:thrust.BUILD",
sha256 = "d021e37f5aac30fd1b9737865399feb57db8e601ae2fc0af3cd41784435e9523",
strip_prefix = "thrust-1.17.2",
urls = ["https://github.com/NVIDIA/thrust/archive/refs/tags/1.17.2.tar.gz"],
)

maybe(
name = "cub",
repo_rule = http_archive,
build_file = "@rules_cuda//third_party:cub.BUILD",
sha256 = "1013a595794548c359f22c07e1f8c620b97e3a577f7e8496d9407f74566a3e2a",
strip_prefix = "cub-1.17.2",
urls = ["https://github.com/NVIDIA/cub/archive/refs/tags/1.17.2.tar.gz"],
)

Collaborator

They are not mandatory, so please move them into third_party/ as well. Otherwise our core rules depend on remote files that are not in use. Let's keep them minimal.

Comment on lines +26 to +37
"cub/*.cuh",
"cub/agent/*.cuh",
"cub/block/*.cuh",
"cub/block/specializations/*.cuh",
"cub/device/*.cuh",
"cub/device/dispatch/*.cuh",
"cub/grid/*.cuh",
"cub/warp/*.cuh",
"cub/warp/specializations/*.cuh",
"cub/iterator/*.cuh",
"cub/thread/*.cuh",
"cub/detail/*.cuh",
Collaborator

Any reason to make the glob so detailed? This is prone to being affected by changes in upstream implementation details. Why not just glob(["cub/**/*.cuh"])?
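The suggested simplification, sketched as it might appear in the cub.BUILD file (the `headers` target name is illustrative):

```starlark
# Sketch: a single recursive glob replaces the per-directory patterns,
# so new upstream subdirectories are picked up automatically.
filegroup(
    name = "headers",
    srcs = glob(["cub/**/*.cuh"]),
)
```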

filegroup(
name = "include-src",
srcs = glob([
"thrust/*.h",
Collaborator

This line is redundant.

#)

filegroup(
name = "include-src",
Collaborator

Maybe call them include-hdrs.

#)

filegroup(
name = "include-src",
Collaborator

Same

@ryanleary (Collaborator)

Wanted to point out that cub, thrust, and libcudacxx have recently all been coalesced into https://github.com/NVIDIA/cccl (all three projects remain header-only), so if this integration moves forward, it should use that repository.

Would it make more sense to add build rules to cccl directly? I can reach out to that team, though it may be somewhat onerous to reach full coverage of the various flags/preprocessor directives.

@cloudhan (Collaborator)

The current blocker is merely how we avoid <dir_of_local_cuda>/cuda/include/{cub,thrust} being exposed through :cuda_headers:

cc_library(
name = "cuda_headers",
hdrs = [":_cuda_header_files"],
includes = ["cuda/include"],
)

Or more generally, we need a configurable way to piece together different packages (like <dir_of_local_cub>, <dir_of_local_nvcc>, etc.) to form a unified CUDA toolkit directory and use it as <dir_of_local_cuda>/cuda. This will be useful for #72 as well.
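One possible way to address the blocker, sketched under the assumption that `:_cuda_header_files` is populated by a glob over the local toolkit (the glob pattern and exclude list below are illustrative, not the repo's actual rule):

```starlark
# Sketch: exclude the toolkit-bundled cub/thrust headers from the
# filegroup feeding :cuda_headers, so the standalone @cub and @thrust
# repositories are the only source of those headers.
filegroup(
    name = "_cuda_header_files",
    srcs = glob(
        ["cuda/include/**"],
        exclude = [
            "cuda/include/cub/**",
            "cuda/include/thrust/**",
        ],
    ),
)

cc_library(
    name = "cuda_headers",
    hdrs = [":_cuda_header_files"],
    includes = ["cuda/include"],
)
```

With the bundled copies excluded, the include-path ordering issue between the toolkit and the standalone repositories goes away.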

@jsharpe (Member, Author) commented Oct 25, 2023

I think ultimately this will be replaced by bzlmod modules: bazelbuild/bazel-central-registry#1058.
@ryanleary Ultimately yes: if the CCCL team can own releasing this to the BCR with build rules, then that would be the best solution for the community.

@cloudhan (Collaborator)

@jsharpe We still need to exclude the folders under <toolkit_root>/include/{cub,thrust} to make it clean. Otherwise the deps order might cause an incorrect include, similar to #105.

3 participants