Converting exported models to GGUF #706
-
I am trying to convert my models/experiments to GGUF format. I first push checkpoints to Hugging Face and then follow the conversion instructions. Am I doing something wrong, or is this not supported?
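For context, a minimal sketch of pulling the pushed checkpoint back down before running a conversion, assuming the huggingface_hub CLI is available; the repo id and local paths are placeholders, not the actual experiment names:

```bash
# Install the Hugging Face CLI (assumption: huggingface_hub with the cli extra).
pip install -U "huggingface_hub[cli]"

# Log in first if the pushed checkpoint repo is private.
huggingface-cli login

# Download the pushed checkpoint into a local directory for conversion.
huggingface-cli download my-user/my-exported-model --local-dir ./my-exported-model
```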
-
Is this happening for all models or only specific ones?

It seems this is a known issue for llama3: ggerganov/llama.cpp#6747 (comment)

Best to research directly in llama.cpp, as it does not seem related to LLM Studio. But it seems like just adding `--vocab-type bpe` to the convert script might solve it.
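A rough sketch of what that invocation could look like, assuming an older llama.cpp checkout that still ships `convert.py` with the `--vocab-type` flag (newer checkouts moved to `convert_hf_to_gguf.py`, where this flag does not apply); the model directory and output file name are placeholders:

```bash
# Get llama.cpp and its Python requirements for the conversion script.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the downloaded Hugging Face checkpoint to GGUF, forcing the BPE
# tokenizer as suggested in ggerganov/llama.cpp#6747.
python convert.py ../my-exported-model \
    --vocab-type bpe \
    --outfile ../my-exported-model-f16.gguf \
    --outtype f16
```

The resulting `.gguf` file can then be loaded by llama.cpp-based tooling; whether the flag is needed at all depends on the llama.cpp version and the model's tokenizer.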