[RFC] Handle sm_* which is no longer supported by CUDA / ptxas exec check or configure check? #30
Newer versions of CUDA no longer support sm_30, and the nvptx-tools 'as' currently doesn't handle that gracefully when verifying (SourceryTools/nvptx-tools#30).

There's a --no-verify workaround in place in ASM_SPEC, but that one doesn't work when using -Wa,--verify on the command line.

Use a more robust workaround: verify using sm_35 when -misa=sm_30 is specified (either implicitly or explicitly).

Tested on nvptx.

gcc/ChangeLog:

2022-03-30  Tom de Vries  <[email protected]>

	* config/nvptx/nvptx.h (ASM_SPEC): Use "-m sm_35" for -misa=sm_30.
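For illustration only (a sketch, not the literal committed text, which may differ), the kind of ASM_SPEC mapping described above could look roughly like this in config/nvptx/nvptx.h:

```c
/* Sketch of the workaround described above: when -misa=sm_30 is in
   effect, hand "-m sm_35" to the nvptx 'as' so that ptxas verification
   still works with CUDA releases that have dropped sm_30; otherwise
   forward the requested ISA unchanged.  */
#define ASM_SPEC "%{misa=sm_30:-m sm_35; misa=*:-m %*}"
```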
…itecture based products is dropped". This resolves #30, "[RFC] Handle sm_* which is no longer supported by CUDA / ptxas exec check or configure check?". Suggested-by: Tom de Vries <[email protected]>
Another thing that we could do: instead of invoking the command-line ptxas, use the CUDA driver API to do the verification.

How does the CUDA Driver API differ from/relate to the PTX Compiler APIs, https://docs.nvidia.com/cuda/ptx-compiler-api/?
I think I follow your thinking: using the CUDA driver API means you're using a defined API rather than the de facto ptxas interface. Note that what you describe checks what .target values are supported by the CUDA driver API, not the CUDA runtime API (in other words, ptxas and friends). Those are separate things.
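For context, a driver-API-based check along those lines could look roughly like the sketch below (standard CUDA Driver API calls; error handling trimmed, and the helper name is made up for illustration). It asks the installed driver whether it accepts PTX with a given .target, which is a different question from what the installed ptxas supports:

```c
#include <cuda.h>

/* Hypothetical helper: ask the CUDA Driver API (i.e., the installed
   driver) whether it accepts the given PTX, for instance a minimal
   module declaring ".target sm_30".  Requires a driver and a device,
   which is exactly the drawback discussed in the next comment.  */
static int
ptx_accepted_by_driver (const char *ptx)
{
  CUdevice dev;
  CUcontext ctx;
  CUmodule mod;
  int ok = 0;

  if (cuInit (0) != CUDA_SUCCESS
      || cuDeviceGet (&dev, 0) != CUDA_SUCCESS
      || cuCtxCreate (&ctx, 0, dev) != CUDA_SUCCESS)
    return 0;

  if (cuModuleLoadData (&mod, ptx) == CUDA_SUCCESS)
    {
      ok = 1;
      cuModuleUnload (mod);
    }

  cuCtxDestroy (ctx);
  return ok;
}
```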
Ah, didn't know that one. Well, after reading the introduction, this looks mainly like the CUDA runtime API equivalent of the part of the CUDA driver API that does the PTX compilation. At first glance, it sounds like a part you could use instead of ptxas. But it's also a recent addition, meaning you'd also require a modern CUDA installation, which could somewhat defeat the purpose, or give a very narrow range of CUDA versions for which this setup would actually be useful.

Note that it's not a good idea to use the driver API, because that requires having a driver installed, which doesn't work in a scenario where you use, say, a server-class machine to do heavy toolchain and/or application builds, and then execute on a separate machine with bulky video cards. We don't want to require (for verification purposes) installing the driver on the server-class machine.
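To make the PTX Compiler API alternative concrete, here is a minimal sketch assuming the nvPTXCompiler* entry points documented at the link above (the helper name and option string are illustrative, and, as noted, this API is a relatively recent toolkit addition). It verifies a PTX string for a given sm_* without a driver being installed:

```c
#include <nvPTXCompiler.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: verify PTX for a given architecture using the PTX Compiler
   API from the CUDA toolkit (no driver needed).  ARCH_OPTION is a
   ptxas-style option such as "--gpu-name=sm_35".  Returns nonzero if
   the PTX compiles cleanly.  */
static int
verify_ptx (const char *ptx, const char *arch_option)
{
  nvPTXCompilerHandle compiler;
  const char *opts[] = { arch_option };
  int ok;

  if (nvPTXCompilerCreate (&compiler, strlen (ptx), ptx)
      != NVPTXCOMPILE_SUCCESS)
    return 0;

  ok = nvPTXCompilerCompile (compiler, 1, opts) == NVPTXCOMPILE_SUCCESS;

  if (!ok)
    {
      /* On failure, dump the compiler's error log to stderr.  */
      size_t log_size = 0;
      nvPTXCompilerGetErrorLogSize (compiler, &log_size);
      if (log_size > 0)
        {
          char *log = malloc (log_size + 1);
          if (log && nvPTXCompilerGetErrorLog (compiler, log)
                       == NVPTXCOMPILE_SUCCESS)
            {
              log[log_size] = '\0';
              fprintf (stderr, "%s", log);
            }
          free (log);
        }
    }

  nvPTXCompilerDestroy (&compiler);
  return ok;
}
```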
..., so that things keep working with CUDA 12.0+.
From https://gcc.gnu.org/pipermail/gcc-patches/2022-March/591154.html