Fix: Add conda CUDA include path for JIT compilation on Linux #221
Description
This PR fixes a Just-In-Time (JIT) compilation failure that occurs when trying to use `nvdiffrast` on Linux systems where the CUDA toolkit was installed via the `nvidia` channel in a conda environment. This is becoming a common way to manage CUDA dependencies.
Problem
The `cuda-toolkit` package provided by the `nvidia` conda channel installs CUDA header files into a non-standard, target-specific directory (e.g., `$CONDA_PREFIX/targets/x86_64-linux/include`), rather than the standard `$CONDA_PREFIX/include`.

When `nvdiffrast` attempts its JIT compilation using `torch.utils.cpp_extension.load`, it fails to find necessary headers like `cuda_runtime_api.h` because this target-specific directory is not on the compiler's default search path, and the JIT build does not honor the environment variables (`CPPFLAGS`, `CXXFLAGS`, `CUDA_HOME`) that are typically used to add include directories, preventing a simple workaround.

This results in compilation errors like `fatal error: cuda_runtime_api.h: No such file or directory` or `fatal error: crt/host_defines.h: No such file or directory`.
Solution
This PR modifies the `_get_plugin` function in `nvdiffrast/torch/ops.py`. It now includes logic to proactively check for this specific conda setup:

- Whether a conda environment is active (the `CONDA_PREFIX` environment variable is set).
- Whether the system is Linux (`platform.system() == 'Linux'`).
- Whether the target-specific include directory exists (`$CONDA_PREFIX/targets/x86_64-linux/include`).

If all checks pass, the directory is added to the `extra_include_paths` list argument passed directly to `torch.utils.cpp_extension.load`. This directly informs the build system about the correct location of the CUDA headers in this specific environment setup, resolving the compilation failures.
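The checks above can be sketched as a small helper. This is an illustrative sketch, not the exact code in the PR; the function name `conda_cuda_include_paths` is hypothetical, and its result would be passed as `extra_include_paths` to `torch.utils.cpp_extension.load`.

```python
import os
import platform


def conda_cuda_include_paths():
    """Return extra include paths for conda's target-specific CUDA headers.

    Returns an empty list when the environment does not match, so the
    result can be passed unconditionally as extra_include_paths.
    """
    conda_prefix = os.environ.get('CONDA_PREFIX')
    if conda_prefix and platform.system() == 'Linux':
        # conda's cuda-toolkit places headers under targets/<arch>/include
        # instead of the standard $CONDA_PREFIX/include.
        include_dir = os.path.join(conda_prefix, 'targets', 'x86_64-linux', 'include')
        if os.path.isdir(include_dir):
            return [include_dir]
    return []
```

Because the helper degrades to an empty list outside the affected setup, it does not change behavior for users with a system-wide CUDA installation.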
How to Test
1. Create a conda environment with PyTorch installed (e.g., `conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia`).
2. Install `cuda-toolkit` from the `nvidia` channel (e.g., `conda install cuda-toolkit -c nvidia`). Note: Ensure the version matches or is compatible with the PyTorch CUDA version.
3. Obtain this version of `nvdiffrast` and install it from the local source (e.g., `pip install .`).
4. Run a script that triggers `nvdiffrast` JIT compilation (e.g., importing `nvdiffrast.torch` and using a function like `rasterize`). The compilation should now succeed without errors.

Notes
This fix targets the `x86_64-linux` target path commonly used by conda on Linux. It might need adaptation for other architectures (e.g., `aarch64`) or operating systems (Windows, macOS) if they exhibit similar issues with conda packages placing headers in non-standard `targets` directories.
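One possible generalization, not part of this PR, would derive the target directory from `platform.machine()` instead of hard-coding `x86_64-linux`. The `<arch>-linux` naming is an assumption extrapolated from conda's x86_64 layout; the helper name is hypothetical.

```python
import os
import platform


def conda_target_include_dir():
    """Guess conda's target-specific CUDA include directory for this machine.

    Assumption: conda names the target directory '<arch>-linux', as it does
    for x86_64. Returns None when no matching directory is found.
    """
    conda_prefix = os.environ.get('CONDA_PREFIX')
    if not conda_prefix or platform.system() != 'Linux':
        return None
    target = platform.machine() + '-linux'  # e.g. 'x86_64-linux', 'aarch64-linux'
    include_dir = os.path.join(conda_prefix, 'targets', target, 'include')
    return include_dir if os.path.isdir(include_dir) else None
```

Verifying the directory exists before using it keeps the guess safe: a wrong arch string simply yields `None` rather than a broken include path.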