Fix global .cuh ignore and enforce tracked CUDA headers #7858

Draft
harshang03 wants to merge 1 commit into deepspeedai:master from harshang03:fix/7701-track-cuh-headers


@harshang03

Summary

  • remove the global *.cuh ignore rule from .gitignore so CUDA headers are not silently dropped from clones/forks
  • add scripts/check_cuh_ignore.py to enforce that required headers (including csrc/adam/multi_tensor_apply.cuh) stay present and tracked
  • wire the validator into pre-commit and add unit coverage for the new guard
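The PR does not show the contents of `scripts/check_cuh_ignore.py`, so the following is only a minimal sketch of such a guard, assuming it shells out to `git check-ignore` and `git ls-files`; the function names and the required-header list are illustrative, with only `csrc/adam/multi_tensor_apply.cuh` taken from the PR itself:

```python
"""Sketch of a check_cuh_ignore.py-style guard (not the PR's actual script)."""
import subprocess
import sys

# Headers that must exist in the index and must not match any ignore rule.
REQUIRED_HEADERS = ["csrc/adam/multi_tensor_apply.cuh"]


def is_tracked(path: str) -> bool:
    # `git ls-files --error-unmatch` exits non-zero for untracked paths.
    result = subprocess.run(
        ["git", "ls-files", "--error-unmatch", path], capture_output=True
    )
    return result.returncode == 0


def is_ignored(path: str) -> bool:
    # `git check-ignore -q` exits 0 when some ignore rule matches the path,
    # regardless of whether the file exists on disk.
    result = subprocess.run(
        ["git", "check-ignore", "-q", path], capture_output=True
    )
    return result.returncode == 0


def main() -> int:
    failures = []
    for header in REQUIRED_HEADERS:
        if is_ignored(header):
            failures.append(f"{header} matches an ignore rule")
        elif not is_tracked(header):
            failures.append(f"{header} is not tracked by git")
    for msg in failures:
        print(f"ERROR: {msg}", file=sys.stderr)
    return 1 if failures else 0


# As a pre-commit hook this would exit via: sys.exit(main())
```

Checking both conditions separately matters: a header can be un-ignored yet still untracked in a fork, and each case needs its own error message.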

Testing

  • python3 -m unittest tests.unit.test_check_cuh_ignore
  • python3 scripts/check_cuh_ignore.py
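The pre-commit wiring mentioned above is typically done with a `repo: local` entry in `.pre-commit-config.yaml`; the hook `id` and `name` below are assumptions, not the PR's actual configuration:

```yaml
- repo: local
  hooks:
    - id: check-cuh-ignore            # hypothetical hook id
      name: check tracked CUDA headers
      entry: python3 scripts/check_cuh_ignore.py
      language: system
      pass_filenames: false           # the script scans a fixed header list
      always_run: true                # run even when no .cuh file is staged
```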

Fixes #7701

Remove blanket .cuh ignores, add a repository check that required CUDA headers stay present and tracked, wire it into pre-commit, and cover the validator with unit tests.
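The failure mode behind #7701 can be reproduced in a scratch repository; `git check-ignore -v` prints the rule that swallows the header:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
# Reproduce the pre-fix state: a blanket ignore rule for CUDA headers.
printf '*.cuh\n' > .gitignore
# check-ignore exits 0 and prints "<source>:<line>:<pattern> <path>"
# when a path is ignored:
git check-ignore -v csrc/adam/multi_tensor_apply.cuh
# -> .gitignore:1:*.cuh	csrc/adam/multi_tensor_apply.cuh
```

Because `git add` silently skips ignored paths, the header never reaches the index in a fresh fork, which is exactly why a tracked-file check is needed on top of removing the rule.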


Successfully merging this pull request may close these issues.

ignoring *.cuh prevents multi_tensor_apply.cuh from being pushed
