Add graph support #311
Closed
Conversation
* Update changelog
* Bump version number to 0.6.0
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
Co-authored-by: Bram Veenboer <[email protected]>
* Make Function::getAttribute const
* Add Function::name
* Add HostMemory::size
* Add DeviceMemory::size
* Add Module constructor with CUjit_option map
* Update CHANGELOG
* Remove <T> for Wrapper constructors
* Update changelog
updates:
- [github.com/pre-commit/mirrors-clang-format: v16.0.6 → v17.0.6](pre-commit/mirrors-clang-format@v16.0.6...v17.0.6)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Support size = 0 for DeviceMemory constructor
* Fix cu::HostMemory constructor
* Add missing checkCudaCall around free/unregister calls
* Pass the correct pointer to cuMemHostRegister
* Use int for return type of CU_POINTER_ATTRIBUTE_IS_MANAGED query
* Initialize size in Stream::memAllocAsync
This is indeed more accurate.

Co-authored-by: Bram Veenboer <[email protected]>
* Update CHANGELOG.md
* Add mdformat to pre-commit configuration
* Change Unreleased to 0.7.0
* Cleanup of changelog
* Add nvml::Device::getClock
* Update CHANGELOG
* Add test
updates:
- [github.com/pre-commit/mirrors-clang-format: v18.1.5 → v18.1.8](pre-commit/mirrors-clang-format@v18.1.5...v18.1.8)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Update CHANGELOG
* Update version number to 0.8.0
* Changes to inline local includes:
  - #include must be at the start of a line
  - the inlined include is placed on the same line as the original #include
* Update cmake/cudawrappers-helper.cmake
* Update cmake/cudawrappers-helper.cmake
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Update cmake/cudawrappers-helper.cmake
* Add updates to inline_local_includes to CHANGELOG
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Co-authored-by: Bram Veenboer <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Hanno Spreeuw <[email protected]>
Co-authored-by: Leon Oostrum <[email protected]>
Co-authored-by: John Romein <[email protected]>
* Add option to create slice of device memory
* Add new DeviceMemory constructor to changelog
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Update C++ standard to C++14
* Upgrade to Catch 3.6.0
* include cuda_runtime in cu.hpp

Co-authored-by: Bram Veenboer <[email protected]>
* Cleanup test of cu::DeviceMemory
* Add DeviceMemory::memset methods + tests
* Add Stream::memsetAsync methods + tests
updates:
- [github.com/pre-commit/mirrors-clang-format: v18.1.8 → v19.1.1](pre-commit/mirrors-clang-format@v18.1.8...v19.1.1)
- [github.com/executablebooks/mdformat: 0.7.17 → 0.7.18](hukkin/mdformat@0.7.17...0.7.18)
updates:
- [github.com/pre-commit/mirrors-clang-format: v19.1.1 → v19.1.3](pre-commit/mirrors-clang-format@v19.1.1...v19.1.3)
* Added `cu::Stream::memcpyHtoD2DAsync()`, `cu::Stream::memcpyDtoHD2Async()`, and `cu::Stream::memcpyDtoD2DAsync()`
* Added `cu::DeviceMemory::memset2D()` and `cu::Stream::memset2DAsync()`
* Added `cufft::FFT1DR2C` and `cufft::FFT1DC2R`
* Added `cu::Device::getOrdinal()`
* Allow non-managed memory dereferencing in `cu::DeviceMemory`

Co-authored-by: Bram Veenboer <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Update Jimver/cuda-toolkit to v0.2.19
* Change CUDA version to 12.6.1
* Remove unused context argument from nvml::Device
* Add missing checkNvmlCall
* Make nvml::Device functions const
* Add pass_filenames option to work around cppcheck error
The CUDA Graph API seems a promising addition to the already existing CUDA streams, especially for mixing host and device functions in a complex order. Moreover, HIP also supports CUDA graphs, so it feels natural to extend cudawrappers to support graphs.
In this pull request, the basic functionality of the CUDA Graph API is introduced.
Clone and verify on a machine with an NVIDIA GPU
Clone and verify on a machine with HIP and an AMD GPU
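For context, here is a minimal sketch of the raw driver-API workflow that graph support wraps: record async work into a graph via stream capture, instantiate it once, then launch the executable graph. This is not the `cu::` wrapper API added by this PR; all names are plain CUDA driver calls, and `cuGraphInstantiate` is used with its CUDA 12 signature.

```cpp
// Minimal sketch of the CUDA driver-API graph workflow
// (capture → instantiate → launch). Not the cudawrappers API.
#include <cstdio>
#include <cstdlib>

#include <cuda.h>

static void check(CUresult result) {
  if (result != CUDA_SUCCESS) {
    const char *msg;
    cuGetErrorString(result, &msg);
    fprintf(stderr, "CUDA error: %s\n", msg);
    exit(EXIT_FAILURE);
  }
}

int main() {
  check(cuInit(0));
  CUdevice device;
  check(cuDeviceGet(&device, 0));
  CUcontext context;
  check(cuCtxCreate(&context, 0, device));

  const size_t size = 1024;
  CUdeviceptr d_buf;
  check(cuMemAlloc(&d_buf, size));
  unsigned char *h_buf;  // page-locked, so the async copy is capturable
  check(cuMemAllocHost(reinterpret_cast<void **>(&h_buf), size));

  CUstream stream;
  check(cuStreamCreate(&stream, CU_STREAM_NON_BLOCKING));

  // Record the async work into a graph instead of executing it immediately.
  CUgraph graph;
  check(cuStreamBeginCapture(stream, CU_STREAM_CAPTURE_MODE_GLOBAL));
  check(cuMemsetD8Async(d_buf, 0x2A, size, stream));
  check(cuMemcpyDtoHAsync(h_buf, d_buf, size, stream));
  check(cuStreamEndCapture(stream, &graph));

  // Instantiate once; the executable graph can then be launched many times.
  CUgraphExec graph_exec;
  check(cuGraphInstantiate(&graph_exec, graph, 0));  // CUDA 12 signature
  check(cuGraphLaunch(graph_exec, stream));
  check(cuStreamSynchronize(stream));
  printf("h_buf[0] = %d\n", h_buf[0]);  // expect 42

  check(cuGraphExecDestroy(graph_exec));
  check(cuGraphDestroy(graph));
  check(cuStreamDestroy(stream));
  check(cuMemFreeHost(h_buf));
  check(cuMemFree(d_buf));
  check(cuCtxDestroy(context));
}
```

The same flow maps to `hipStreamBeginCapture`, `hipGraphInstantiate`, and `hipGraphLaunch` in HIP, which is why a single cudawrappers abstraction can cover both of the verification targets above.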