Add THD format support for Context Parallel #641
Conversation
Added some custom CUDA kernels to replace the PyTorch native ops, so that THD and BSHD have the same performance when using context parallel.
LGTM. Thanks!
/te-ci pytorch
@kunlunl thanks for the PR. Could you fix the DCO and lint errors please? Instructions are in the Details link. Thanks.
LGTM. Pending context parallel CI.
@kunlunl could you also please update the pytest version to 7.2 in
@cyanguwa Both are done.
/te-ci pytorch
Make Context Parallel support the THD format.
Currently only the Flash Attention backend is supported, since Fused Attention doesn't support the THD format.