How to make small-llama2 model work (returns nonsense) #10477
Unanswered · CycloneRing asked this question in Q&A
Replies: 0 comments
Hi, I'm testing the simple chat example with small-llama2.Q4_0.gguf.
I tested tinyllama-1.1b-chat-v0.3.Q4_0.gguf and it works fine with this small change:
But it doesn't work with small-llama2. Is there anything specific I need to do?
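For context, a typical invocation of the simple-chat example looks roughly like the sketch below. This assumes a llama.cpp build where the example binary is named `llama-simple-chat`; the binary name and flags can differ between versions, so treat it as illustrative rather than exact:

```shell
# Hypothetical sketch: run the llama.cpp simple-chat example against a
# local GGUF model. -m selects the model file, -c the context size,
# -ngl the number of layers to offload to the GPU (if built with GPU support).
./llama-simple-chat -m small-llama2.Q4_0.gguf -c 2048 -ngl 99
```

Note that simple-chat applies the model's chat template, so a tiny base model without chat fine-tuning may produce incoherent output even when the same command works for a chat-tuned model like tinyllama-1.1b-chat.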