Add GGUF BF16 dtype support #2387

Open
wants to merge 6 commits into main
Conversation

EricLBuehler (Member)

Currently, the GgmlDType enum supports F16 but not BF16. This PR introduces support for the BF16 type.
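
For context, the change is essentially a new enum variant plus the bookkeeping around it. Below is a minimal sketch of the idea, assuming GgmlDType exposes type_size/block_size methods as candle's quantized module does; the quantized variants are elided, so this is illustrative rather than the PR's actual code:

    // Minimal sketch, not the PR's actual code: quantized variants elided.
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    pub enum GgmlDType {
        F32,
        F16,
        BF16, // the new variant this PR introduces
    }

    impl GgmlDType {
        /// Bytes per stored element.
        pub fn type_size(&self) -> usize {
            match self {
                Self::F32 => 4,
                Self::F16 | Self::BF16 => 2, // both are 16-bit float formats
            }
        }

        /// Elements per block; unquantized dtypes are blocks of one element.
        pub fn block_size(&self) -> usize {
            1
        }
    }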

I would appreciate a review to confirm this looks good! I have tested it successfully on my machine, which has avx and f16c, and the CUDA tests also pass even though no CUDA-specific changes were necessary.

One confusing situation remains, though: when the tensor is part of a QMatMul. In that case (and for all other dtypes that QStorage does not support for quantized matmul), we should perhaps dequantize and then perform the matmul using cuBLAS? This change could perhaps be made in QStorage::fwd; a rough sketch of the idea follows.
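
For illustration only, here is what that fallback might look like using candle's public API (QTensor::dequantize and Tensor::matmul); the function name is hypothetical, and the real change would live in the internal QStorage::fwd:

    use candle_core::{quantized::QTensor, Result, Tensor};

    /// Hypothetical fallback: for dtypes with no quantized matmul kernel
    /// (such as BF16 here), dequantize the weight and run a regular matmul,
    /// which dispatches to cuBLAS on CUDA devices.
    fn matmul_via_dequantize(weight: &QTensor, xs: &Tensor) -> Result<Tensor> {
        let w = weight.dequantize(xs.device())?; // widen to a plain Tensor
        // QMatMul applies the weight transposed, like a linear layer,
        // so mirror that here.
        xs.matmul(&w.t()?)
    }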

@@ -409,6 +409,7 @@ impl QCudaStorage {
         match self.dtype {
             GgmlDType::F32 => deq::<f32>(&buffer, block_len, &mut out)?,
             GgmlDType::F16 => deq::<half::f16>(&buffer, block_len, &mut out)?,
+            GgmlDType::BF16 => deq::<half::bf16>(&buffer, block_len, &mut out)?,
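
For context, deq here reads the raw little-endian buffer as elements of the given type and widens each to f32. A self-contained sketch of that idea (not candle's actual helper, whose signature is fallible and whose trait bound differs):

    use half::{bf16, f16};

    /// Sketch-only trait: an element type readable from little-endian
    /// bytes and convertible to f32.
    trait ToF32: Copy {
        const SIZE: usize;
        fn from_le(bytes: &[u8]) -> Self;
        fn widen(self) -> f32;
    }

    impl ToF32 for f32 {
        const SIZE: usize = 4;
        fn from_le(b: &[u8]) -> Self { f32::from_le_bytes(b.try_into().unwrap()) }
        fn widen(self) -> f32 { self }
    }

    impl ToF32 for f16 {
        const SIZE: usize = 2;
        fn from_le(b: &[u8]) -> Self { f16::from_le_bytes(b.try_into().unwrap()) }
        fn widen(self) -> f32 { self.to_f32() }
    }

    impl ToF32 for bf16 {
        const SIZE: usize = 2;
        fn from_le(b: &[u8]) -> Self { bf16::from_le_bytes(b.try_into().unwrap()) }
        fn widen(self) -> f32 { self.to_f32() }
    }

    /// Dequantize: widen up to `n` elements of `buffer` to f32.
    fn deq<T: ToF32>(buffer: &[u8], n: usize, out: &mut Vec<f32>) {
        for chunk in buffer.chunks_exact(T::SIZE).take(n) {
            out.push(T::from_le(chunk).widen());
        }
    }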
EricLBuehler (Member, Author)

With #2424 this can be optimized!
